/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sts=4 et sw=4 tw=99:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
 *    JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - MarkRoots - marks the stack and other roots
 *  - Mark      - incrementally marks reachable things
 *  - Sweep     - sweeps zones in groups and continues marking unswept zones
 *  - Finalize  - performs background finalization, concurrent with mutator
 *  - Compact   - incrementally compacts by zone
 *  - Decommit  - performs background decommit and chunk removal
 *
 * The MarkRoots activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   MarkRoots:  Roots pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
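 *
 * For example, an embedder that wants collection to run in slices typically
 * drives it roughly as follows (an illustrative sketch using the public GC
 * API; see jsapi.h and js/GCAPI.h; the 10 ms budget is an arbitrary choice):
 *
 *     JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *     JS::PrepareForFullGC(cx);
 *     JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 10);
 *     while (JS::IsIncrementalGCInProgress(cx)) {
 *         // ... let JS code run between slices ...
 *         JS::PrepareForIncrementalGC(cx);
 *         JS::IncrementalGCSlice(cx, JS::gcreason::API, 10);
 *     }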
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code were allowed to run after the start of sweeping, it could
 * observe the state of the cache and create a new reference to an object that
 * was just about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example, say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these cycles are
 * found using Tarjan's algorithm for finding the strongly connected components
 * of a graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue until we have swept the current sweep group. Following a
 * reset, a new non-incremental collection is started.
 *
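 * As a rough illustration of the pre-write barrier described under
 * 'Incremental marking' above (a simplified pseudocode sketch; the real
 * barrier templates live in gc/Barrier.h and consult the zone's
 * needsIncrementalBarrier() state):
 *
 *     write(field, newValue):
 *         if (zone->needsIncrementalBarrier())
 *             mark(field.oldValue)   // keep the start-of-GC snapshot reachable
 *         field = newValue
 *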
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */

#include "jsgcinlines.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/SizePrintfMacros.h"
#include "mozilla/TimeStamp.h"

#include <ctype.h>
#include <string.h>
#ifndef XP_WIN
# include <sys/mman.h>
# include <unistd.h>
#endif

#include "jsapi.h"
#include "jsatom.h"
#include "jscntxt.h"
#include "jscompartment.h"
#include "jsfriendapi.h"
#include "jsobj.h"
#include "jsprf.h"
#include "jsscript.h"
#include "jstypes.h"
#include "jsutil.h"
#include "jswatchpoint.h"
#include "jsweakmap.h"
#ifdef XP_WIN
# include "jswin.h"
#endif

#include "gc/FindSCCs.h"
#include "gc/GCInternals.h"
#include "gc/GCTrace.h"
#include "gc/Marking.h"
#include "gc/Memory.h"
#include "gc/Policy.h"
#include "jit/BaselineJIT.h"
#include "jit/IonCode.h"
#include "jit/JitcodeMap.h"
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "vm/Debugger.h"
#include "vm/GeckoProfiler.h"
#include "vm/ProxyObject.h"
#include "vm/Shape.h"
#include "vm/String.h"
#include "vm/Symbol.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"

#include "jsobjinlines.h"
#include "jsscriptinlines.h"

#include "gc/Heap-inl.h"
#include "gc/Nursery-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/Stack-inl.h"
#include "vm/String-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::ArrayLength;
using mozilla::Get;
using mozilla::HashCodeScrambler;
using mozilla::Maybe;
using mozilla::Swap;
using mozilla::TimeStamp;

using JS::AutoGCRooter;

/* Increase the IGC marking slice time if we are in highFrequencyGC mode.
 */
static const int IGC_MARK_SLICE_MULTIPLIER = 2;

const AllocKind gc::slotsToThingKind[] = {
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16
};

static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType) \
    static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
                  #sizedType " is smaller than SortedArenaList::MinThingSize!"); \
    static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
                  #sizedType " is smaller than FreeSpan"); \
    static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
                  "Size of " #sizedType " is not a multiple of CellAlignBytes"); \
    static_assert(sizeof(sizedType) >= MinCellSize, \
                  "Size of " #sizedType " is smaller than the minimum size");
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
#undef CHECK_THING_SIZE

const uint32_t Arena::ThingSizes[] = {
#define EXPAND_THING_SIZE(allocKind, traceKind, type, sizedType) \
    sizeof(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
#undef EXPAND_THING_SIZE
};

FreeSpan ArenaLists::placeholder;

#undef CHECK_THING_SIZE_INNER
#undef CHECK_THING_SIZE

#define OFFSET(type) uint32_t(ArenaHeaderSize + (ArenaSize - ArenaHeaderSize) % sizeof(type))

const uint32_t Arena::FirstThingOffsets[] = {
#define EXPAND_FIRST_THING_OFFSET(allocKind, traceKind, type, sizedType) \
    OFFSET(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
#undef EXPAND_FIRST_THING_OFFSET
};

#undef OFFSET

#define COUNT(type) uint32_t((ArenaSize - ArenaHeaderSize) / sizeof(type))

const uint32_t Arena::ThingsPerArena[] = {
#define EXPAND_THINGS_PER_ARENA(allocKind, traceKind, type, sizedType) \
    COUNT(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
#undef EXPAND_THINGS_PER_ARENA
};

#undef COUNT

struct js::gc::FinalizePhase
{
    gcstats::PhaseKind statsPhase;
    AllocKinds kinds;
};

/*
 * Finalization order for objects swept incrementally on the active thread.
 */
static const FinalizePhase ForegroundObjectFinalizePhase = {
    gcstats::PhaseKind::SWEEP_OBJECT, {
        AllocKind::OBJECT0,
        AllocKind::OBJECT2,
        AllocKind::OBJECT4,
        AllocKind::OBJECT8,
        AllocKind::OBJECT12,
        AllocKind::OBJECT16
    }
};

/*
 * Finalization order for GC things swept incrementally on the active thread.
 */
static const FinalizePhase IncrementalFinalizePhases[] = {
    {
        gcstats::PhaseKind::SWEEP_SCRIPT, {
            AllocKind::SCRIPT
        }
    },
    {
        gcstats::PhaseKind::SWEEP_JITCODE, {
            AllocKind::JITCODE
        }
    }
};

/*
 * Finalization order for GC things swept on the background thread.
*/staticconstFinalizePhaseBackgroundFinalizePhases[]={{gcstats::PhaseKind::SWEEP_SCRIPT,{AllocKind::LAZY_SCRIPT}},{gcstats::PhaseKind::SWEEP_OBJECT,{AllocKind::FUNCTION,AllocKind::FUNCTION_EXTENDED,AllocKind::OBJECT0_BACKGROUND,AllocKind::OBJECT2_BACKGROUND,AllocKind::OBJECT4_BACKGROUND,AllocKind::OBJECT8_BACKGROUND,AllocKind::OBJECT12_BACKGROUND,AllocKind::OBJECT16_BACKGROUND}},{gcstats::PhaseKind::SWEEP_SCOPE,{AllocKind::SCOPE,}},{gcstats::PhaseKind::SWEEP_REGEXP_SHARED,{AllocKind::REGEXP_SHARED,}},{gcstats::PhaseKind::SWEEP_STRING,{AllocKind::FAT_INLINE_STRING,AllocKind::STRING,AllocKind::EXTERNAL_STRING,AllocKind::FAT_INLINE_ATOM,AllocKind::ATOM,AllocKind::SYMBOL}},{gcstats::PhaseKind::SWEEP_SHAPE,{AllocKind::SHAPE,AllocKind::ACCESSOR_SHAPE,AllocKind::BASE_SHAPE,AllocKind::OBJECT_GROUP}}};// Incremental sweeping is controlled by a list of actions that describe what// happens and in what order. Due to the incremental nature of sweeping an// action does not necessarily run to completion so the current state is tracked// in the GCRuntime by the performSweepActions() method. We may yield to the// mutator after running part of any action.//// There are two types of action: per-sweep-group and per-zone.//// Per-sweep-group actions are run first. Per-zone actions are grouped into// phases, with each phase run once per sweep group, and each action in it run// for every zone in the group.//// This is illustrated by the following pseudocode://// for each sweep group:// for each per-sweep-group action:// run part or all of action// maybe yield to the mutator// for each per-zone phase:// for each zone in sweep group:// for each action in phase:// run part or all of action// maybe yield to the mutator//// Progress through the loops is stored in GCRuntime, e.g. |sweepActionIndex|// for looping through the sweep actions.usingPerSweepGroupSweepAction=IncrementalProgress(*)(GCRuntime*gc,SliceBudget&budget);structPerZoneSweepAction{usingFunc=IncrementalProgress(*)(GCRuntime*gc,FreeOp*fop,Zone*zone,SliceBudget&budget,AllocKindkind);Funcfunc;AllocKindkind;PerZoneSweepAction(Funcfunc,AllocKindkind):func(func),kind(kind){}};usingPerSweepGroupActionVector=Vector<PerSweepGroupSweepAction,0,SystemAllocPolicy>;usingPerZoneSweepActionVector=Vector<PerZoneSweepAction,0,SystemAllocPolicy>;usingPerZoneSweepPhaseVector=Vector<PerZoneSweepActionVector,0,SystemAllocPolicy>;staticPerSweepGroupActionVectorPerSweepGroupSweepActions;staticPerZoneSweepPhaseVectorPerZoneSweepPhases;booljs::gc::InitializeStaticData(){returnGCRuntime::initializeSweepActions();}template<>JSObject*ArenaCellIterImpl::get<JSObject>()const{MOZ_ASSERT(!done());returnreinterpret_cast<JSObject*>(getCell());}voidArena::unmarkAll(){uintptr_t*word=chunk()->bitmap.arenaBits(this);memset(word,0,ArenaBitmapWords*sizeof(uintptr_t));}/* static */voidArena::staticAsserts(){static_assert(size_t(AllocKind::LIMIT)<=255,"We must be able to fit the allockind into uint8_t.");static_assert(JS_ARRAY_LENGTH(ThingSizes)==size_t(AllocKind::LIMIT),"We haven't defined all thing sizes.");static_assert(JS_ARRAY_LENGTH(FirstThingOffsets)==size_t(AllocKind::LIMIT),"We haven't defined all offsets.");static_assert(JS_ARRAY_LENGTH(ThingsPerArena)==size_t(AllocKind::LIMIT),"We haven't defined all counts.");}template<typenameT>inlinesize_tArena::finalize(FreeOp*fop,AllocKindthingKind,size_tthingSize){/* Enforce requirements on size of T. 
*/MOZ_ASSERT(thingSize%CellAlignBytes==0);MOZ_ASSERT(thingSize>=MinCellSize);MOZ_ASSERT(thingSize<=255);MOZ_ASSERT(allocated());MOZ_ASSERT(thingKind==getAllocKind());MOZ_ASSERT(thingSize==getThingSize());MOZ_ASSERT(!hasDelayedMarking);MOZ_ASSERT(!markOverflow);MOZ_ASSERT(!allocatedDuringIncremental);uint_fast16_tfirstThing=firstThingOffset(thingKind);uint_fast16_tfirstThingOrSuccessorOfLastMarkedThing=firstThing;uint_fast16_tlastThing=ArenaSize-thingSize;FreeSpannewListHead;FreeSpan*newListTail=&newListHead;size_tnmarked=0;if(MOZ_UNLIKELY(MemProfiler::enabled())){for(ArenaCellIterUnderFinalizei(this);!i.done();i.next()){T*t=i.get<T>();if(t->asTenured().isMarked())MemProfiler::MarkTenured(reinterpret_cast<void*>(t));}}for(ArenaCellIterUnderFinalizei(this);!i.done();i.next()){T*t=i.get<T>();if(t->asTenured().isMarked()){uint_fast16_tthing=uintptr_t(t)&ArenaMask;if(thing!=firstThingOrSuccessorOfLastMarkedThing){// We just finished passing over one or more free things,// so record a new FreeSpan.newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,thing-thingSize,this);newListTail=newListTail->nextSpanUnchecked(this);}firstThingOrSuccessorOfLastMarkedThing=thing+thingSize;nmarked++;}else{t->finalize(fop);JS_POISON(t,JS_SWEPT_TENURED_PATTERN,thingSize);TraceTenuredFinalize(t);}}if(nmarked==0){// Do nothing. The caller will update the arena appropriately.MOZ_ASSERT(newListTail==&newListHead);JS_EXTRA_POISON(data,JS_SWEPT_TENURED_PATTERN,sizeof(data));returnnmarked;}MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing!=firstThing);uint_fast16_tlastMarkedThing=firstThingOrSuccessorOfLastMarkedThing-thingSize;if(lastThing==lastMarkedThing){// If the last thing was marked, we will have already set the bounds of// the final span, and we just need to terminate the list.newListTail->initAsEmpty();}else{// Otherwise, end the list with a span that covers the final stretch of free things.newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing,lastThing,this);}firstFreeSpan=newListHead;#ifdef DEBUGsize_tnfree=numFreeThings(thingSize);MOZ_ASSERT(nfree+nmarked==thingsPerArena(thingKind));#endifreturnnmarked;}// Finalize arenas from src list, releasing empty arenas if keepArenas wasn't// specified and inserting the others into the appropriate destination size// bins.template<typenameT>staticinlineboolFinalizeTypedArenas(FreeOp*fop,Arena**src,SortedArenaList&dest,AllocKindthingKind,SliceBudget&budget,ArenaLists::KeepArenasEnumkeepArenas){// When operating in the foreground, take the lock at the top.Maybe<AutoLockGC>maybeLock;if(fop->onActiveCooperatingThread())maybeLock.emplace(fop->runtime());// During background sweeping free arenas are released later on in// sweepBackgroundThings().MOZ_ASSERT_IF(!fop->onActiveCooperatingThread(),keepArenas==ArenaLists::KEEP_ARENAS);size_tthingSize=Arena::thingSize(thingKind);size_tthingsPerArena=Arena::thingsPerArena(thingKind);while(Arena*arena=*src){*src=arena->next;size_tnmarked=arena->finalize<T>(fop,thingKind,thingSize);size_tnfree=thingsPerArena-nmarked;if(nmarked)dest.insertAt(arena,nfree);elseif(keepArenas==ArenaLists::KEEP_ARENAS)arena->chunk()->recycleArena(arena,dest,thingsPerArena);elsefop->runtime()->gc.releaseArena(arena,maybeLock.ref());budget.step(thingsPerArena);if(budget.isOverBudget())returnfalse;}returntrue;}/* * Finalize the list. On return, |al|'s cursor points to the first non-empty * arena in the list (which may be null if all arenas are full). 
*/staticboolFinalizeArenas(FreeOp*fop,Arena**src,SortedArenaList&dest,AllocKindthingKind,SliceBudget&budget,ArenaLists::KeepArenasEnumkeepArenas){switch(thingKind){#define EXPAND_CASE(allocKind, traceKind, type, sizedType) \ case AllocKind::allocKind: \ return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget, keepArenas);FOR_EACH_ALLOCKIND(EXPAND_CASE)#undef EXPAND_CASEdefault:MOZ_CRASH("Invalid alloc kind");}}Chunk*ChunkPool::pop(){MOZ_ASSERT(bool(head_)==bool(count_));if(!count_)returnnullptr;returnremove(head_);}voidChunkPool::push(Chunk*chunk){MOZ_ASSERT(!chunk->info.next);MOZ_ASSERT(!chunk->info.prev);chunk->info.next=head_;if(head_)head_->info.prev=chunk;head_=chunk;++count_;MOZ_ASSERT(verify());}Chunk*ChunkPool::remove(Chunk*chunk){MOZ_ASSERT(count_>0);MOZ_ASSERT(contains(chunk));if(head_==chunk)head_=chunk->info.next;if(chunk->info.prev)chunk->info.prev->info.next=chunk->info.next;if(chunk->info.next)chunk->info.next->info.prev=chunk->info.prev;chunk->info.next=chunk->info.prev=nullptr;--count_;MOZ_ASSERT(verify());returnchunk;}#ifdef DEBUGboolChunkPool::contains(Chunk*chunk)const{verify();for(Chunk*cursor=head_;cursor;cursor=cursor->info.next){if(cursor==chunk)returntrue;}returnfalse;}boolChunkPool::verify()const{MOZ_ASSERT(bool(head_)==bool(count_));uint32_tcount=0;for(Chunk*cursor=head_;cursor;cursor=cursor->info.next,++count){MOZ_ASSERT_IF(cursor->info.prev,cursor->info.prev->info.next==cursor);MOZ_ASSERT_IF(cursor->info.next,cursor->info.next->info.prev==cursor);}MOZ_ASSERT(count_==count);returntrue;}#endifvoidChunkPool::Iter::next(){MOZ_ASSERT(!done());current_=current_->info.next;}ChunkPoolGCRuntime::expireEmptyChunkPool(constAutoLockGC&lock){MOZ_ASSERT(emptyChunks(lock).verify());MOZ_ASSERT(tunables.minEmptyChunkCount(lock)<=tunables.maxEmptyChunkCount());ChunkPoolexpired;while(emptyChunks(lock).count()>tunables.minEmptyChunkCount(lock)){Chunk*chunk=emptyChunks(lock).pop();prepareToFreeChunk(chunk->info);expired.push(chunk);}MOZ_ASSERT(expired.verify());MOZ_ASSERT(emptyChunks(lock).verify());MOZ_ASSERT(emptyChunks(lock).count()<=tunables.maxEmptyChunkCount());MOZ_ASSERT(emptyChunks(lock).count()<=tunables.minEmptyChunkCount(lock));returnexpired;}staticvoidFreeChunkPool(JSRuntime*rt,ChunkPool&pool){for(ChunkPool::Iteriter(pool);!iter.done();){Chunk*chunk=iter.get();iter.next();pool.remove(chunk);MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);UnmapPages(static_cast<void*>(chunk),ChunkSize);}MOZ_ASSERT(pool.count()==0);}voidGCRuntime::freeEmptyChunks(JSRuntime*rt,constAutoLockGC&lock){FreeChunkPool(rt,emptyChunks(lock));}inlinevoidGCRuntime::prepareToFreeChunk(ChunkInfo&info){MOZ_ASSERT(numArenasFreeCommitted>=info.numArenasFreeCommitted);numArenasFreeCommitted-=info.numArenasFreeCommitted;stats().count(gcstats::STAT_DESTROY_CHUNK);#ifdef DEBUG/* * Let FreeChunkPool detect a missing prepareToFreeChunk call before it * frees chunk. 
*/info.numArenasFreeCommitted=0;#endif}inlinevoidGCRuntime::updateOnArenaFree(constChunkInfo&info){++numArenasFreeCommitted;}voidChunk::addArenaToFreeList(JSRuntime*rt,Arena*arena){MOZ_ASSERT(!arena->allocated());arena->next=info.freeArenasHead;info.freeArenasHead=arena;++info.numArenasFreeCommitted;++info.numArenasFree;rt->gc.updateOnArenaFree(info);}voidChunk::addArenaToDecommittedList(JSRuntime*rt,constArena*arena){++info.numArenasFree;decommittedArenas.set(Chunk::arenaIndex(arena->address()));}voidChunk::recycleArena(Arena*arena,SortedArenaList&dest,size_tthingsPerArena){arena->setAsFullyUnused();dest.insertAt(arena,thingsPerArena);}voidChunk::releaseArena(JSRuntime*rt,Arena*arena,constAutoLockGC&lock){MOZ_ASSERT(arena->allocated());MOZ_ASSERT(!arena->hasDelayedMarking);arena->release();addArenaToFreeList(rt,arena);updateChunkListAfterFree(rt,lock);}boolChunk::decommitOneFreeArena(JSRuntime*rt,AutoLockGC&lock){MOZ_ASSERT(info.numArenasFreeCommitted>0);Arena*arena=fetchNextFreeArena(rt);updateChunkListAfterAlloc(rt,lock);boolok;{AutoUnlockGCunlock(lock);ok=MarkPagesUnused(arena,ArenaSize);}if(ok)addArenaToDecommittedList(rt,arena);elseaddArenaToFreeList(rt,arena);updateChunkListAfterFree(rt,lock);returnok;}voidChunk::decommitAllArenasWithoutUnlocking(constAutoLockGC&lock){for(size_ti=0;i<ArenasPerChunk;++i){if(decommittedArenas.get(i)||arenas[i].allocated())continue;if(MarkPagesUnused(&arenas[i],ArenaSize)){info.numArenasFreeCommitted--;decommittedArenas.set(i);}}}voidChunk::updateChunkListAfterAlloc(JSRuntime*rt,constAutoLockGC&lock){if(MOZ_UNLIKELY(!hasAvailableArenas())){rt->gc.availableChunks(lock).remove(this);rt->gc.fullChunks(lock).push(this);}}voidChunk::updateChunkListAfterFree(JSRuntime*rt,constAutoLockGC&lock){if(info.numArenasFree==1){rt->gc.fullChunks(lock).remove(this);rt->gc.availableChunks(lock).push(this);}elseif(!unused()){MOZ_ASSERT(!rt->gc.fullChunks(lock).contains(this));MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this));MOZ_ASSERT(!rt->gc.emptyChunks(lock).contains(this));}else{MOZ_ASSERT(unused());rt->gc.availableChunks(lock).remove(this);decommitAllArenas(rt);MOZ_ASSERT(info.numArenasFreeCommitted==0);rt->gc.recycleChunk(this,lock);}}voidGCRuntime::releaseArena(Arena*arena,constAutoLockGC&lock){arena->zone->usage.removeGCArena();if(isBackgroundSweeping())arena->zone->threshold.updateForRemovedArena(tunables);returnarena->chunk()->releaseArena(rt,arena,lock);}GCRuntime::GCRuntime(JSRuntime*rt):rt(rt),systemZone(nullptr),systemZoneGroup(nullptr),atomsZone(nullptr),stats_(rt),marker(rt),usage(nullptr),mMemProfiler(rt),nextCellUniqueId_(LargestTaggedNullCellPointer+1),// Ensure disjoint from null tagged pointers.numArenasFreeCommitted(0),verifyPreData(nullptr),chunkAllocationSinceLastGC(false),lastGCTime(PRMJ_Now()),mode(JSGC_MODE_INCREMENTAL),numActiveZoneIters(0),cleanUpEverything(false),grayBufferState(GCRuntime::GrayBufferState::Unused),grayBitsValid(false),majorGCTriggerReason(JS::gcreason::NO_REASON),fullGCForAtomsRequested_(false),minorGCNumber(0),majorGCNumber(0),jitReleaseNumber(0),number(0),isFull(false),incrementalState(gc::State::NotActive),lastMarkSlice(false),sweepOnBackgroundThread(false),blocksToFreeAfterSweeping((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),sweepGroupIndex(0),sweepGroups(nullptr),currentSweepGroup(nullptr),sweepPhaseIndex(0),sweepZone(nullptr),sweepActionIndex(0),abortSweepAfterCurrentGroup(false),arenasAllocatedDuringSweep(nullptr),startedCompacting(false),relocatedArenasToRelease(nullptr),#ifdef 
JS_GC_ZEALmarkingValidator(nullptr),#endifinterFrameGC(false),defaultTimeBudget_((int64_t)SliceBudget::UnlimitedTimeBudget),incrementalAllowed(true),compactingEnabled(true),poked(false),#ifdef JS_GC_ZEALzealModeBits(0),zealFrequency(0),nextScheduled(0),deterministicOnly(false),incrementalLimit(0),#endiffullCompartmentChecks(false),alwaysPreserveCode(false),#ifdef DEBUGarenasEmptyAtShutdown(true),#endiflock(mutexid::GCLock),allocTask(rt,emptyChunks_.ref()),decommitTask(rt),helperState(rt),nursery_(rt),storeBuffer_(rt,nursery()),blocksToFreeAfterMinorGC((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE){setGCMode(JSGC_MODE_GLOBAL);}#ifdef JS_GC_ZEALvoidGCRuntime::getZealBits(uint32_t*zealBits,uint32_t*frequency,uint32_t*scheduled){*zealBits=zealModeBits;*frequency=zealFrequency;*scheduled=nextScheduled;}constchar*gc::ZealModeHelpText=" Specifies how zealous the garbage collector should be. Some of these modes can\n"" be set simultaneously, by passing multiple level options, e.g. \"2;4\" will activate\n"" both modes 2 and 4. Modes can be specified by name or number.\n"" \n"" Values:\n"" 0: (None) Normal amount of collection (resets all modes)\n"" 1: (Poke) Collect when roots are added or removed\n"" 2: (Alloc) Collect when every N allocations (default: 100)\n"" 3: (FrameGC) Collect when the window paints (browser only)\n"" 4: (VerifierPre) Verify pre write barriers between instructions\n"" 5: (FrameVerifierPre) Verify pre write barriers between paints\n"" 6: (StackRooting) Verify stack rooting\n"" 7: (GenerationalGC) Collect the nursery every N nursery allocations\n"" 8: (IncrementalRootsThenFinish) Incremental GC in two slices: 1) mark roots 2) finish collection\n"" 9: (IncrementalMarkAllThenFinish) Incremental GC in two slices: 1) mark all 2) new marking and finish\n"" 10: (IncrementalMultipleSlices) Incremental GC in multiple slices\n"" 11: (IncrementalMarkingValidator) Verify incremental marking\n"" 12: (ElementsBarrier) Always use the individual element post-write barrier, regardless of elements size\n"" 13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"" 14: (Compact) Perform a shrinking collection every N allocations\n"" 15: (CheckHeapAfterGC) Walk the heap to check its integrity after every GC\n"" 16: (CheckNursery) Check nursery integrity on minor GC\n"" 17: (IncrementalSweepThenFinish) Incremental GC in two slices: 1) start sweeping 2) finish collection\n";// The set of zeal modes that control incremental slices. These modes are// mutually exclusive.staticconstmozilla::EnumSet<ZealMode>IncrementalSliceZealModes={ZealMode::IncrementalRootsThenFinish,ZealMode::IncrementalMarkAllThenFinish,ZealMode::IncrementalMultipleSlices,ZealMode::IncrementalSweepThenFinish};voidGCRuntime::setZeal(uint8_tzeal,uint32_tfrequency){MOZ_ASSERT(zeal<=unsigned(ZealMode::Limit));if(verifyPreData)VerifyBarriers(rt,PreBarrierVerifier);if(zeal==0){if(hasZealMode(ZealMode::GenerationalGC)){evictNursery(JS::gcreason::DEBUG_GC);nursery().leaveZealMode();}if(isIncrementalGCInProgress())finishGC(JS::gcreason::DEBUG_GC);}ZealModezealMode=ZealMode(zeal);if(zealMode==ZealMode::GenerationalGC){for(ZoneGroupsItergroup(rt);!group.done();group.next())group->nursery().enterZealMode();}// Some modes are mutually exclusive. 
If we're setting one of those, we// first reset all of them.if(IncrementalSliceZealModes.contains(zealMode)){for(automode:IncrementalSliceZealModes)clearZealMode(mode);}boolschedule=zealMode>=ZealMode::Alloc;if(zeal!=0)zealModeBits|=1<<unsigned(zeal);elsezealModeBits=0;zealFrequency=frequency;nextScheduled=schedule?frequency:0;}voidGCRuntime::setNextScheduled(uint32_tcount){nextScheduled=count;}boolGCRuntime::parseAndSetZeal(constchar*str){intfrequency=-1;boolfoundFrequency=false;mozilla::Vector<int,0,SystemAllocPolicy>zeals;staticconststruct{constchar*constzealMode;size_tlength;uint32_tzeal;}zealModes[]={#define ZEAL_MODE(name, value) {#name, sizeof(#name) - 1, value},JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)#undef ZEAL_MODE{"None",4,0}};do{intzeal=-1;constchar*p=nullptr;if(isdigit(str[0])){zeal=atoi(str);size_toffset=strspn(str,"0123456789");p=str+offset;}else{for(autoz:zealModes){if(!strncmp(str,z.zealMode,z.length)){zeal=z.zeal;p=str+z.length;break;}}}if(p){if(!*p||*p==';'){frequency=JS_DEFAULT_ZEAL_FREQ;}elseif(*p==','){frequency=atoi(p+1);foundFrequency=true;}}if(zeal<0||zeal>int(ZealMode::Limit)||frequency<=0){fprintf(stderr,"Format: JS_GC_ZEAL=level(;level)*[,N]\n");fputs(ZealModeHelpText,stderr);returnfalse;}if(!zeals.emplaceBack(zeal)){returnfalse;}}while(!foundFrequency&&(str=strchr(str,';'))!=nullptr&&str++);for(autoz:zeals)setZeal(z,frequency);returntrue;}staticconstchar*AllocKindName(AllocKindkind){staticconstchar*names[]={#define EXPAND_THING_NAME(allocKind, _1, _2, _3) \ #allocKind,FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)#undef EXPAND_THING_NAME};static_assert(ArrayLength(names)==size_t(AllocKind::LIMIT),"names array should have an entry for every AllocKind");size_ti=size_t(kind);MOZ_ASSERT(i<ArrayLength(names));returnnames[i];}voidjs::gc::DumpArenaInfo(){fprintf(stderr,"Arena header size: %"PRIuSIZE"\n\n",ArenaHeaderSize);fprintf(stderr,"GC thing kinds:\n");fprintf(stderr,"%25s %8s %8s %8s\n","AllocKind:","Size:","Count:","Padding:");for(autokind:AllAllocKinds()){fprintf(stderr,"%25s %8"PRIuSIZE" %8"PRIuSIZE" %8"PRIuSIZE"\n",AllocKindName(kind),Arena::thingSize(kind),Arena::thingsPerArena(kind),Arena::firstThingOffset(kind)-ArenaHeaderSize);}}#endif // JS_GC_ZEAL/* * Lifetime in number of major GCs for type sets attached to scripts containing * observed types. */staticconstuint64_tJIT_SCRIPT_RELEASE_TYPES_PERIOD=20;boolGCRuntime::init(uint32_tmaxbytes,uint32_tmaxNurseryBytes){MOZ_ASSERT(SystemPageSize());if(!rootsHash.ref().init(256))returnfalse;{AutoLockGClock(rt);/* * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes * for default backward API compatibility. */MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES,maxbytes,lock));MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_NURSERY_BYTES,maxNurseryBytes,lock));setMaxMallocBytes(maxbytes);constchar*size=getenv("JSGC_MARK_STACK_LIMIT");if(size)setMarkStackLimit(atoi(size),lock);jitReleaseNumber=majorGCNumber+JIT_SCRIPT_RELEASE_TYPES_PERIOD;if(!nursery().init(maxNurseryBytes,lock))returnfalse;}#ifdef JS_GC_ZEALconstchar*zealSpec=getenv("JS_GC_ZEAL");if(zealSpec&&zealSpec[0]&&!parseAndSetZeal(zealSpec))returnfalse;#endifif(!InitTrace(*this))returnfalse;if(!marker.init(mode))returnfalse;returntrue;}voidGCRuntime::finish(){/* Wait for the nursery sweeping to end. 
*/for(ZoneGroupsItergroup(rt);!group.done();group.next()){if(group->nursery().isEnabled())group->nursery().waitBackgroundFreeEnd();}/* * Wait until the background finalization and allocation stops and the * helper thread shuts down before we forcefully release any remaining GC * memory. */helperState.finish();allocTask.cancel(GCParallelTask::CancelAndWait);decommitTask.cancel(GCParallelTask::CancelAndWait);#ifdef JS_GC_ZEAL/* Free memory associated with GC verification. */finishVerifier();#endif/* Delete all remaining zones. */if(rt->gcInitialized){AutoSetThreadIsSweepingthreadIsSweeping;for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){for(CompartmentsInZoneItercomp(zone);!comp.done();comp.next())js_delete(comp.get());js_delete(zone.get());}}groups.ref().clear();FreeChunkPool(rt,fullChunks_.ref());FreeChunkPool(rt,availableChunks_.ref());FreeChunkPool(rt,emptyChunks_.ref());FinishTrace();for(ZoneGroupsItergroup(rt);!group.done();group.next())group->nursery().printTotalProfileTimes();stats().printTotalProfileTimes();}boolGCRuntime::setParameter(JSGCParamKeykey,uint32_tvalue,AutoLockGC&lock){switch(key){caseJSGC_MAX_MALLOC_BYTES:setMaxMallocBytes(value);for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next())zone->setGCMaxMallocBytes(maxMallocBytesAllocated()*0.9);break;caseJSGC_SLICE_TIME_BUDGET:defaultTimeBudget_=value?value:SliceBudget::UnlimitedTimeBudget;break;caseJSGC_MARK_STACK_LIMIT:if(value==0)returnfalse;setMarkStackLimit(value,lock);break;caseJSGC_MODE:if(mode!=JSGC_MODE_GLOBAL&&mode!=JSGC_MODE_ZONE&&mode!=JSGC_MODE_INCREMENTAL){returnfalse;}mode=JSGCMode(value);break;caseJSGC_COMPACTING_ENABLED:compactingEnabled=value!=0;break;default:if(!tunables.setParameter(key,value,lock))returnfalse;for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){zone->threshold.updateAfterGC(zone->usage.gcBytes(),GC_NORMAL,tunables,schedulingState,lock);}}returntrue;}boolGCSchedulingTunables::setParameter(JSGCParamKeykey,uint32_tvalue,constAutoLockGC&lock){// Limit heap growth factor to one hundred times size of current 
heap.constdoubleMaxHeapGrowthFactor=100;switch(key){caseJSGC_MAX_BYTES:gcMaxBytes_=value;break;caseJSGC_MAX_NURSERY_BYTES:gcMaxNurseryBytes_=value;break;caseJSGC_HIGH_FREQUENCY_TIME_LIMIT:highFrequencyThresholdUsec_=value*PRMJ_USEC_PER_MSEC;break;caseJSGC_HIGH_FREQUENCY_LOW_LIMIT:{uint64_tnewLimit=(uint64_t)value*1024*1024;if(newLimit==UINT64_MAX)returnfalse;highFrequencyLowLimitBytes_=newLimit;if(highFrequencyLowLimitBytes_>=highFrequencyHighLimitBytes_)highFrequencyHighLimitBytes_=highFrequencyLowLimitBytes_+1;MOZ_ASSERT(highFrequencyHighLimitBytes_>highFrequencyLowLimitBytes_);break;}caseJSGC_HIGH_FREQUENCY_HIGH_LIMIT:{uint64_tnewLimit=(uint64_t)value*1024*1024;if(newLimit==0)returnfalse;highFrequencyHighLimitBytes_=newLimit;if(highFrequencyHighLimitBytes_<=highFrequencyLowLimitBytes_)highFrequencyLowLimitBytes_=highFrequencyHighLimitBytes_-1;MOZ_ASSERT(highFrequencyHighLimitBytes_>highFrequencyLowLimitBytes_);break;}caseJSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:{doublenewGrowth=value/100.0;if(newGrowth<=0.85||newGrowth>MaxHeapGrowthFactor)returnfalse;highFrequencyHeapGrowthMax_=newGrowth;MOZ_ASSERT(highFrequencyHeapGrowthMax_/0.85>1.0);break;}caseJSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:{doublenewGrowth=value/100.0;if(newGrowth<=0.85||newGrowth>MaxHeapGrowthFactor)returnfalse;highFrequencyHeapGrowthMin_=newGrowth;MOZ_ASSERT(highFrequencyHeapGrowthMin_/0.85>1.0);break;}caseJSGC_LOW_FREQUENCY_HEAP_GROWTH:{doublenewGrowth=value/100.0;if(newGrowth<=0.9||newGrowth>MaxHeapGrowthFactor)returnfalse;lowFrequencyHeapGrowth_=newGrowth;MOZ_ASSERT(lowFrequencyHeapGrowth_/0.9>1.0);break;}caseJSGC_DYNAMIC_HEAP_GROWTH:dynamicHeapGrowthEnabled_=value!=0;break;caseJSGC_DYNAMIC_MARK_SLICE:dynamicMarkSliceEnabled_=value!=0;break;caseJSGC_ALLOCATION_THRESHOLD:gcZoneAllocThresholdBase_=value*1024*1024;break;caseJSGC_MIN_EMPTY_CHUNK_COUNT:minEmptyChunkCount_=value;if(minEmptyChunkCount_>maxEmptyChunkCount_)maxEmptyChunkCount_=minEmptyChunkCount_;MOZ_ASSERT(maxEmptyChunkCount_>=minEmptyChunkCount_);break;caseJSGC_MAX_EMPTY_CHUNK_COUNT:maxEmptyChunkCount_=value;if(minEmptyChunkCount_>maxEmptyChunkCount_)minEmptyChunkCount_=maxEmptyChunkCount_;MOZ_ASSERT(maxEmptyChunkCount_>=minEmptyChunkCount_);break;caseJSGC_REFRESH_FRAME_SLICES_ENABLED:refreshFrameSlicesEnabled_=value!=0;break;default:MOZ_CRASH("Unknown GC 
parameter.");}returntrue;}uint32_tGCRuntime::getParameter(JSGCParamKeykey,constAutoLockGC&lock){switch(key){caseJSGC_MAX_BYTES:returnuint32_t(tunables.gcMaxBytes());caseJSGC_MAX_MALLOC_BYTES:returnmallocCounter.maxBytes();caseJSGC_BYTES:returnuint32_t(usage.gcBytes());caseJSGC_MODE:returnuint32_t(mode);caseJSGC_UNUSED_CHUNKS:returnuint32_t(emptyChunks(lock).count());caseJSGC_TOTAL_CHUNKS:returnuint32_t(fullChunks(lock).count()+availableChunks(lock).count()+emptyChunks(lock).count());caseJSGC_SLICE_TIME_BUDGET:if(defaultTimeBudget_.ref()==SliceBudget::UnlimitedTimeBudget){return0;}else{MOZ_RELEASE_ASSERT(defaultTimeBudget_>=0);MOZ_RELEASE_ASSERT(defaultTimeBudget_<=UINT32_MAX);returnuint32_t(defaultTimeBudget_);}caseJSGC_MARK_STACK_LIMIT:returnmarker.maxCapacity();caseJSGC_HIGH_FREQUENCY_TIME_LIMIT:returntunables.highFrequencyThresholdUsec()/PRMJ_USEC_PER_MSEC;caseJSGC_HIGH_FREQUENCY_LOW_LIMIT:returntunables.highFrequencyLowLimitBytes()/1024/1024;caseJSGC_HIGH_FREQUENCY_HIGH_LIMIT:returntunables.highFrequencyHighLimitBytes()/1024/1024;caseJSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:returnuint32_t(tunables.highFrequencyHeapGrowthMax()*100);caseJSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:returnuint32_t(tunables.highFrequencyHeapGrowthMin()*100);caseJSGC_LOW_FREQUENCY_HEAP_GROWTH:returnuint32_t(tunables.lowFrequencyHeapGrowth()*100);caseJSGC_DYNAMIC_HEAP_GROWTH:returntunables.isDynamicHeapGrowthEnabled();caseJSGC_DYNAMIC_MARK_SLICE:returntunables.isDynamicMarkSliceEnabled();caseJSGC_ALLOCATION_THRESHOLD:returntunables.gcZoneAllocThresholdBase()/1024/1024;caseJSGC_MIN_EMPTY_CHUNK_COUNT:returntunables.minEmptyChunkCount(lock);caseJSGC_MAX_EMPTY_CHUNK_COUNT:returntunables.maxEmptyChunkCount();caseJSGC_COMPACTING_ENABLED:returncompactingEnabled;caseJSGC_REFRESH_FRAME_SLICES_ENABLED:returntunables.areRefreshFrameSlicesEnabled();default:MOZ_ASSERT(key==JSGC_NUMBER);returnuint32_t(number);}}voidGCRuntime::setMarkStackLimit(size_tlimit,AutoLockGC&lock){MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());AutoUnlockGCunlock(lock);AutoStopVerifyingBarrierspauseVerification(rt,false);marker.setMaxCapacity(limit);}boolGCRuntime::addBlackRootsTracer(JSTraceDataOptraceOp,void*data){AssertHeapIsIdle();return!!blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp,data));}voidGCRuntime::removeBlackRootsTracer(JSTraceDataOptraceOp,void*data){// Can be called from finalizersfor(size_ti=0;i<blackRootTracers.ref().length();i++){Callback<JSTraceDataOp>*e=&blackRootTracers.ref()[i];if(e->op==traceOp&&e->data==data){blackRootTracers.ref().erase(e);}}}voidGCRuntime::setGrayRootsTracer(JSTraceDataOptraceOp,void*data){AssertHeapIsIdle();grayRootTracer.op=traceOp;grayRootTracer.data=data;}voidGCRuntime::setGCCallback(JSGCCallbackcallback,void*data){gcCallback.op=callback;gcCallback.data=data;}voidGCRuntime::callGCCallback(JSGCStatusstatus)const{if(gcCallback.op)gcCallback.op(TlsContext.get(),status,gcCallback.data);}voidGCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallbackcallback,void*data){tenuredCallback.op=callback;tenuredCallback.data=data;}voidGCRuntime::callObjectsTenuredCallback(){if(tenuredCallback.op)tenuredCallback.op(TlsContext.get(),tenuredCallback.data);}namespace{classAutoNotifyGCActivity{public:explicitAutoNotifyGCActivity(GCRuntime&gc):gc_(gc){if(!gc_.isIncrementalGCInProgress())gc_.callGCCallback(JSGC_BEGIN);}~AutoNotifyGCActivity(){if(!gc_.isIncrementalGCInProgress())gc_.callGCCallback(JSGC_END);}private:GCRuntime&gc_;};}// 
(anon)boolGCRuntime::addFinalizeCallback(JSFinalizeCallbackcallback,void*data){returnfinalizeCallbacks.ref().append(Callback<JSFinalizeCallback>(callback,data));}voidGCRuntime::removeFinalizeCallback(JSFinalizeCallbackcallback){for(Callback<JSFinalizeCallback>*p=finalizeCallbacks.ref().begin();p<finalizeCallbacks.ref().end();p++){if(p->op==callback){finalizeCallbacks.ref().erase(p);break;}}}voidGCRuntime::callFinalizeCallbacks(FreeOp*fop,JSFinalizeStatusstatus)const{for(auto&p:finalizeCallbacks.ref())p.op(fop,status,!isFull,p.data);}boolGCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallbackcallback,void*data){returnupdateWeakPointerZonesCallbacks.ref().append(Callback<JSWeakPointerZonesCallback>(callback,data));}voidGCRuntime::removeWeakPointerZonesCallback(JSWeakPointerZonesCallbackcallback){for(auto&p:updateWeakPointerZonesCallbacks.ref()){if(p.op==callback){updateWeakPointerZonesCallbacks.ref().erase(&p);break;}}}voidGCRuntime::callWeakPointerZonesCallbacks()const{for(autoconst&p:updateWeakPointerZonesCallbacks.ref())p.op(TlsContext.get(),p.data);}boolGCRuntime::addWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallbackcallback,void*data){returnupdateWeakPointerCompartmentCallbacks.ref().append(Callback<JSWeakPointerCompartmentCallback>(callback,data));}voidGCRuntime::removeWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallbackcallback){for(auto&p:updateWeakPointerCompartmentCallbacks.ref()){if(p.op==callback){updateWeakPointerCompartmentCallbacks.ref().erase(&p);break;}}}voidGCRuntime::callWeakPointerCompartmentCallbacks(JSCompartment*comp)const{for(autoconst&p:updateWeakPointerCompartmentCallbacks.ref())p.op(TlsContext.get(),comp,p.data);}JS::GCSliceCallbackGCRuntime::setSliceCallback(JS::GCSliceCallbackcallback){returnstats().setSliceCallback(callback);}JS::GCNurseryCollectionCallbackGCRuntime::setNurseryCollectionCallback(JS::GCNurseryCollectionCallbackcallback){returnstats().setNurseryCollectionCallback(callback);}JS::DoCycleCollectionCallbackGCRuntime::setDoCycleCollectionCallback(JS::DoCycleCollectionCallbackcallback){autoprior=gcDoCycleCollectionCallback;gcDoCycleCollectionCallback=Callback<JS::DoCycleCollectionCallback>(callback,nullptr);returnprior.op;}voidGCRuntime::callDoCycleCollectionCallback(JSContext*cx){if(gcDoCycleCollectionCallback.op)gcDoCycleCollectionCallback.op(cx);}boolGCRuntime::addRoot(Value*vp,constchar*name){/* * Sometimes Firefox will hold weak references to objects and then convert * them to strong references by calling AddRoot (e.g., via PreserveWrapper, * or ModifyBusyCount in workers). We need a read barrier to cover these * cases. 
*/if(isIncrementalGCInProgress())GCPtrValue::writeBarrierPre(*vp);returnrootsHash.ref().put(vp,name);}voidGCRuntime::removeRoot(Value*vp){rootsHash.ref().remove(vp);poke();}externJS_FRIEND_API(bool)js::AddRawValueRoot(JSContext*cx,Value*vp,constchar*name){MOZ_ASSERT(vp);MOZ_ASSERT(name);boolok=cx->runtime()->gc.addRoot(vp,name);if(!ok)JS_ReportOutOfMemory(cx);returnok;}externJS_FRIEND_API(void)js::RemoveRawValueRoot(JSContext*cx,Value*vp){cx->runtime()->gc.removeRoot(vp);}voidGCRuntime::setMaxMallocBytes(size_tvalue){mallocCounter.setMax(value);for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next())zone->setGCMaxMallocBytes(value);}voidGCRuntime::updateMallocCounter(JS::Zone*zone,size_tnbytes){booltriggered=mallocCounter.update(this,nbytes);if(!triggered&&zone)zone->updateMallocCounter(nbytes);}doubleZoneHeapThreshold::allocTrigger(boolhighFrequencyGC)const{return(highFrequencyGC?0.85:0.9)*gcTriggerBytes();}/* static */doubleZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_tlastBytes,constGCSchedulingTunables&tunables,constGCSchedulingState&state){if(!tunables.isDynamicHeapGrowthEnabled())return3.0;// For small zones, our collection heuristics do not matter much: favor// something simple in this case.if(lastBytes<1*1024*1024)returntunables.lowFrequencyHeapGrowth();// If GC's are not triggering in rapid succession, use a lower threshold so// that we will collect garbage sooner.if(!state.inHighFrequencyGCMode())returntunables.lowFrequencyHeapGrowth();// The heap growth factor depends on the heap size after a GC and the GC// frequency. For low frequency GCs (more than 1sec between GCs) we let// the heap grow to 150%. For high frequency GCs we let the heap grow// depending on the heap size:// lastBytes < highFrequencyLowLimit: 300%// lastBytes > highFrequencyHighLimit: 150%// otherwise: linear interpolation between 300% and 150% based on lastBytes// Use shorter names to make the operation comprehensible.doubleminRatio=tunables.highFrequencyHeapGrowthMin();doublemaxRatio=tunables.highFrequencyHeapGrowthMax();doublelowLimit=tunables.highFrequencyLowLimitBytes();doublehighLimit=tunables.highFrequencyHighLimitBytes();if(lastBytes<=lowLimit)returnmaxRatio;if(lastBytes>=highLimit)returnminRatio;doublefactor=maxRatio-((maxRatio-minRatio)*((lastBytes-lowLimit)/(highLimit-lowLimit)));MOZ_ASSERT(factor>=minRatio);MOZ_ASSERT(factor<=maxRatio);returnfactor;}/* static */size_tZoneHeapThreshold::computeZoneTriggerBytes(doublegrowthFactor,size_tlastBytes,JSGCInvocationKindgckind,constGCSchedulingTunables&tunables,constAutoLockGC&lock){size_tbase=gckind==GC_SHRINK?Max(lastBytes,tunables.minEmptyChunkCount(lock)*ChunkSize):Max(lastBytes,tunables.gcZoneAllocThresholdBase());doubletrigger=double(base)*growthFactor;returnsize_t(Min(double(tunables.gcMaxBytes()),trigger));}voidZoneHeapThreshold::updateAfterGC(size_tlastBytes,JSGCInvocationKindgckind,constGCSchedulingTunables&tunables,constGCSchedulingState&state,constAutoLockGC&lock){gcHeapGrowthFactor_=computeZoneHeapGrowthFactorForHeapSize(lastBytes,tunables,state);gcTriggerBytes_=computeZoneTriggerBytes(gcHeapGrowthFactor_,lastBytes,gckind,tunables,lock);}voidZoneHeapThreshold::updateForRemovedArena(constGCSchedulingTunables&tunables){size_tamount=ArenaSize*gcHeapGrowthFactor_;MOZ_ASSERT(amount>0);if((gcTriggerBytes_<amount)||(gcTriggerBytes_-amount<tunables.gcZoneAllocThresholdBase()*gcHeapGrowthFactor_)){return;}gcTriggerBytes_-=amount;}voidGCMarker::delayMarkingArena(Arena*arena){if(arena->hasDelayedMarking){/* Arena already scheduled to be 
marked later */return;}arena->setNextDelayedMarking(unmarkedArenaStackTop);unmarkedArenaStackTop=arena;#ifdef DEBUGmarkLaterArenas++;#endif}voidGCMarker::delayMarkingChildren(constvoid*thing){constTenuredCell*cell=TenuredCell::fromPointer(thing);cell->arena()->markOverflow=1;delayMarkingArena(cell->arena());}inlinevoidArenaLists::prepareForIncrementalGC(){purge();for(autoi:AllAllocKinds())arenaLists(i).moveCursorToEnd();}/* Compacting GC */boolGCRuntime::shouldCompact(){// Compact on shrinking GC if enabled, but skip compacting in incremental// GCs if we are currently animating.returninvocationKind==GC_SHRINK&&isCompactingGCEnabled()&&(!isIncremental||rt->lastAnimationTime+PRMJ_USEC_PER_SEC<PRMJ_Now());}boolGCRuntime::isCompactingGCEnabled()const{returncompactingEnabled&&TlsContext.get()->compactingDisabledCount==0;}AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext*cx):cx(cx){++cx->compactingDisabledCount;if(cx->runtime()->gc.isIncrementalGCInProgress()&&cx->runtime()->gc.isCompactingGc())FinishGC(cx);}AutoDisableCompactingGC::~AutoDisableCompactingGC(){MOZ_ASSERT(cx->compactingDisabledCount>0);--cx->compactingDisabledCount;}staticboolCanRelocateZone(Zone*zone){return!zone->isAtomsZone()&&!zone->isSelfHostingZone();}staticconstAllocKindAllocKindsToRelocate[]={AllocKind::FUNCTION,AllocKind::FUNCTION_EXTENDED,AllocKind::OBJECT0,AllocKind::OBJECT0_BACKGROUND,AllocKind::OBJECT2,AllocKind::OBJECT2_BACKGROUND,AllocKind::OBJECT4,AllocKind::OBJECT4_BACKGROUND,AllocKind::OBJECT8,AllocKind::OBJECT8_BACKGROUND,AllocKind::OBJECT12,AllocKind::OBJECT12_BACKGROUND,AllocKind::OBJECT16,AllocKind::OBJECT16_BACKGROUND,AllocKind::SCRIPT,AllocKind::LAZY_SCRIPT,AllocKind::SHAPE,AllocKind::ACCESSOR_SHAPE,AllocKind::BASE_SHAPE,AllocKind::FAT_INLINE_STRING,AllocKind::STRING,AllocKind::EXTERNAL_STRING,AllocKind::FAT_INLINE_ATOM,AllocKind::ATOM,AllocKind::SCOPE,AllocKind::REGEXP_SHARED};Arena*ArenaList::removeRemainingArenas(Arena**arenap){// This is only ever called to remove arenas that are after the cursor, so// we don't need to update it.#ifdef DEBUGfor(Arena*arena=*arenap;arena;arena=arena->next)MOZ_ASSERT(cursorp_!=&arena->next);#endifArena*remainingArenas=*arenap;*arenap=nullptr;check();returnremainingArenas;}staticboolShouldRelocateAllArenas(JS::gcreason::Reasonreason){returnreason==JS::gcreason::DEBUG_GC;}/* * Choose which arenas to relocate all cells from. Return an arena cursor that * can be passed to removeRemainingArenas(). */Arena**ArenaList::pickArenasToRelocate(size_t&arenaTotalOut,size_t&relocTotalOut){// Relocate the greatest number of arenas such that the number of used cells// in relocated arenas is less than or equal to the number of free cells in// unrelocated arenas. In other words we only relocate cells we can move// into existing arenas, and we choose the least full areans to relocate.//// This is made easier by the fact that the arena list has been sorted in// descending order of number of used cells, so we will always relocate a// tail of the arena list. 
All we need to do is find the point at which to// start relocating.check();if(isCursorAtEnd())returnnullptr;Arena**arenap=cursorp_;// Next arena to consider for relocation.size_tpreviousFreeCells=0;// Count of free cells before arenap.size_tfollowingUsedCells=0;// Count of used cells after arenap.size_tfullArenaCount=0;// Number of full arenas (not relocated).size_tnonFullArenaCount=0;// Number of non-full arenas (considered for relocation).size_tarenaIndex=0;// Index of the next arena to consider.for(Arena*arena=head_;arena!=*cursorp_;arena=arena->next)fullArenaCount++;for(Arena*arena=*cursorp_;arena;arena=arena->next){followingUsedCells+=arena->countUsedCells();nonFullArenaCount++;}mozilla::DebugOnly<size_t>lastFreeCells(0);size_tcellsPerArena=Arena::thingsPerArena((*arenap)->getAllocKind());while(*arenap){Arena*arena=*arenap;if(followingUsedCells<=previousFreeCells)break;size_tfreeCells=arena->countFreeCells();size_tusedCells=cellsPerArena-freeCells;followingUsedCells-=usedCells;#ifdef DEBUGMOZ_ASSERT(freeCells>=lastFreeCells);lastFreeCells=freeCells;#endifpreviousFreeCells+=freeCells;arenap=&arena->next;arenaIndex++;}size_trelocCount=nonFullArenaCount-arenaIndex;MOZ_ASSERT(relocCount<nonFullArenaCount);MOZ_ASSERT((relocCount==0)==(!*arenap));arenaTotalOut+=fullArenaCount+nonFullArenaCount;relocTotalOut+=relocCount;returnarenap;}#ifdef DEBUGinlineboolPtrIsInRange(constvoid*ptr,constvoid*start,size_tlength){returnuintptr_t(ptr)-uintptr_t(start)<length;}#endifstaticTenuredCell*AllocRelocatedCell(Zone*zone,AllocKindthingKind,size_tthingSize){AutoEnterOOMUnsafeRegionoomUnsafe;void*dstAlloc=zone->arenas.allocateFromFreeList(thingKind,thingSize);if(!dstAlloc)dstAlloc=GCRuntime::refillFreeListInGC(zone,thingKind);if(!dstAlloc){// This can only happen in zeal mode or debug builds as we don't// otherwise relocate more cells than we have existing free space// for.oomUnsafe.crash("Could not allocate new arena while compacting");}returnTenuredCell::fromPointer(dstAlloc);}staticvoidRelocateCell(Zone*zone,TenuredCell*src,AllocKindthingKind,size_tthingSize){JS::AutoSuppressGCAnalysisnogc(TlsContext.get());// Allocate a new cell.MOZ_ASSERT(zone==src->zone());TenuredCell*dst=AllocRelocatedCell(zone,thingKind,thingSize);// Copy source cell contents to destination.memcpy(dst,src,thingSize);// Move any uid attached to the object.src->zone()->transferUniqueId(dst,src);if(IsObjectAllocKind(thingKind)){JSObject*srcObj=static_cast<JSObject*>(static_cast<Cell*>(src));JSObject*dstObj=static_cast<JSObject*>(static_cast<Cell*>(dst));if(srcObj->isNative()){NativeObject*srcNative=&srcObj->as<NativeObject>();NativeObject*dstNative=&dstObj->as<NativeObject>();// Fixup the pointer to inline object elements if necessary.if(srcNative->hasFixedElements()){uint32_tnumShifted=srcNative->getElementsHeader()->numShiftedElements();dstNative->setFixedElements(numShifted);}// For copy-on-write objects that own their elements, fix up the// owner pointer to point to the relocated object.if(srcNative->denseElementsAreCopyOnWrite()){GCPtrNativeObject&owner=dstNative->getElementsHeader()->ownerObject();if(owner==srcNative)owner=dstNative;}}elseif(srcObj->is<ProxyObject>()){if(srcObj->as<ProxyObject>().usingInlineValueArray())dstObj->as<ProxyObject>().setInlineValueArray();}// Call object moved hook if present.if(JSObjectMovedOpop=srcObj->getClass()->extObjectMovedOp())op(dstObj,srcObj);MOZ_ASSERT_IF(dstObj->isNative(),!PtrIsInRange((constValue*)dstObj->as<NativeObject>().getDenseElements(),src,thingSize));}// Copy the mark 
bits.dst->copyMarkBitsFrom(src);// Mark source cell as forwarded and leave a pointer to the destination.RelocationOverlay*overlay=RelocationOverlay::fromCell(src);overlay->forwardTo(dst);}staticvoidRelocateArena(Arena*arena,SliceBudget&sliceBudget){MOZ_ASSERT(arena->allocated());MOZ_ASSERT(!arena->hasDelayedMarking);MOZ_ASSERT(!arena->markOverflow);MOZ_ASSERT(!arena->allocatedDuringIncremental);MOZ_ASSERT(arena->bufferedCells()->isEmpty());Zone*zone=arena->zone;AllocKindthingKind=arena->getAllocKind();size_tthingSize=arena->getThingSize();for(ArenaCellIterUnderGCi(arena);!i.done();i.next()){RelocateCell(zone,i.getCell(),thingKind,thingSize);sliceBudget.step();}#ifdef DEBUGfor(ArenaCellIterUnderGCi(arena);!i.done();i.next()){TenuredCell*src=i.getCell();MOZ_ASSERT(RelocationOverlay::isCellForwarded(src));TenuredCell*dest=Forwarded(src);MOZ_ASSERT(src->isMarked(BLACK)==dest->isMarked(BLACK));MOZ_ASSERT(src->isMarked(GRAY)==dest->isMarked(GRAY));}#endif}staticinlineboolShouldProtectRelocatedArenas(JS::gcreason::Reasonreason){// For zeal mode collections we don't release the relocated arenas// immediately. Instead we protect them and keep them around until the next// collection so we can catch any stray accesses to them.#ifdef DEBUGreturnreason==JS::gcreason::DEBUG_GC;#elsereturnfalse;#endif}/* * Relocate all arenas identified by pickArenasToRelocate: for each arena, * relocate each cell within it, then add it to a list of relocated arenas. */Arena*ArenaList::relocateArenas(Arena*toRelocate,Arena*relocated,SliceBudget&sliceBudget,gcstats::Statistics&stats){check();while(Arena*arena=toRelocate){toRelocate=arena->next;RelocateArena(arena,sliceBudget);// Prepend to list of relocated arenasarena->next=relocated;relocated=arena;stats.count(gcstats::STAT_ARENA_RELOCATED);}check();returnrelocated;}// Skip compacting zones unless we can free a certain proportion of their GC// heap memory.staticconstdoubleMIN_ZONE_RECLAIM_PERCENT=2.0;staticboolShouldRelocateZone(size_tarenaCount,size_trelocCount,JS::gcreason::Reasonreason){if(relocCount==0)returnfalse;if(IsOOMReason(reason))returntrue;return(relocCount*100.0)/arenaCount>=MIN_ZONE_RECLAIM_PERCENT;}boolArenaLists::relocateArenas(Zone*zone,Arena*&relocatedListOut,JS::gcreason::Reasonreason,SliceBudget&sliceBudget,gcstats::Statistics&stats){// This is only called from the active thread while we are doing a GC, so// there is no need to lock.MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime_));MOZ_ASSERT(runtime_->gc.isHeapCompacting());MOZ_ASSERT(!runtime_->gc.isBackgroundSweeping());// Clear all the free 
lists.purge();if(ShouldRelocateAllArenas(reason)){zone->prepareForCompacting();for(autokind:AllocKindsToRelocate){ArenaList&al=arenaLists(kind);Arena*allArenas=al.head();al.clear();relocatedListOut=al.relocateArenas(allArenas,relocatedListOut,sliceBudget,stats);}}else{size_tarenaCount=0;size_trelocCount=0;AllAllocKindArray<Arena**>toRelocate;for(autokind:AllocKindsToRelocate)toRelocate[kind]=arenaLists(kind).pickArenasToRelocate(arenaCount,relocCount);if(!ShouldRelocateZone(arenaCount,relocCount,reason))returnfalse;zone->prepareForCompacting();for(autokind:AllocKindsToRelocate){if(toRelocate[kind]){ArenaList&al=arenaLists(kind);Arena*arenas=al.removeRemainingArenas(toRelocate[kind]);relocatedListOut=al.relocateArenas(arenas,relocatedListOut,sliceBudget,stats);}}}returntrue;}boolGCRuntime::relocateArenas(Zone*zone,JS::gcreason::Reasonreason,Arena*&relocatedListOut,SliceBudget&sliceBudget){gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::COMPACT_MOVE);MOZ_ASSERT(!zone->isPreservingCode());MOZ_ASSERT(CanRelocateZone(zone));js::CancelOffThreadIonCompile(rt,JS::Zone::Compact);if(!zone->arenas.relocateArenas(zone,relocatedListOut,reason,sliceBudget,stats()))returnfalse;#ifdef DEBUG// Check that we did as much compaction as we should have. There// should always be less than one arena's worth of free cells.for(autoi:AllocKindsToRelocate){ArenaList&al=zone->arenas.arenaLists(i);size_tfreeCells=0;for(Arena*arena=al.arenaAfterCursor();arena;arena=arena->next)freeCells+=arena->countFreeCells();MOZ_ASSERT(freeCells<Arena::thingsPerArena(i));}#endifreturntrue;}template<typenameT>inlinevoidMovingTracer::updateEdge(T**thingp){autothing=*thingp;if(thing->runtimeFromAnyThread()==runtime()&&IsForwarded(thing))*thingp=Forwarded(thing);}voidMovingTracer::onObjectEdge(JSObject**objp){updateEdge(objp);}voidMovingTracer::onShapeEdge(Shape**shapep){updateEdge(shapep);}voidMovingTracer::onStringEdge(JSString**stringp){updateEdge(stringp);}voidMovingTracer::onScriptEdge(JSScript**scriptp){updateEdge(scriptp);}voidMovingTracer::onLazyScriptEdge(LazyScript**lazyp){updateEdge(lazyp);}voidMovingTracer::onBaseShapeEdge(BaseShape**basep){updateEdge(basep);}voidMovingTracer::onScopeEdge(Scope**scopep){updateEdge(scopep);}voidMovingTracer::onRegExpSharedEdge(RegExpShared**sharedp){updateEdge(sharedp);}voidZone::prepareForCompacting(){FreeOp*fop=runtimeFromActiveCooperatingThread()->defaultFreeOp();discardJitCode(fop);}voidGCRuntime::sweepTypesAfterCompacting(Zone*zone){FreeOp*fop=rt->defaultFreeOp();zone->beginSweepTypes(fop,rt->gc.releaseObservedTypes&&!zone->isPreservingCode());AutoClearTypeInferenceStateOnOOMoom(zone);for(autoscript=zone->cellIter<JSScript>();!script.done();script.next())script->maybeSweepTypes(&oom);for(autogroup=zone->cellIter<ObjectGroup>();!group.done();group.next())group->maybeSweep(&oom);zone->types.endSweep(rt);}voidGCRuntime::sweepZoneAfterCompacting(Zone*zone){MOZ_ASSERT(zone->isCollecting());FreeOp*fop=rt->defaultFreeOp();sweepTypesAfterCompacting(zone);zone->sweepBreakpoints(fop);zone->sweepWeakMaps();for(auto*cache:zone->weakCaches())cache->sweep();if(jit::JitZone*jitZone=zone->jitZone())jitZone->sweep(fop);for(CompartmentsInZoneIterc(zone);!c.done();c.next()){c->objectGroups.sweep(fop);c->sweepRegExps();c->sweepSavedStacks();c->sweepTemplateLiteralMap();c->sweepVarNames();c->sweepGlobalObject();c->sweepSelfHostingScriptSource();c->sweepDebugEnvironments();c->sweepJitCompartment(fop);c->sweepNativeIterators();c->sweepTemplateObjects();}}template<typenameT>staticinlinevoidUpdateCellPointers(Mov
ingTracer*trc,T*cell){cell->fixupAfterMovingGC();cell->traceChildren(trc);}template<typenameT>staticvoidUpdateArenaPointersTyped(MovingTracer*trc,Arena*arena,JS::TraceKindtraceKind){for(ArenaCellIterUnderGCi(arena);!i.done();i.next())UpdateCellPointers(trc,reinterpret_cast<T*>(i.getCell()));}/* * Update the internal pointers for all cells in an arena. */staticvoidUpdateArenaPointers(MovingTracer*trc,Arena*arena){AllocKindkind=arena->getAllocKind();switch(kind){#define EXPAND_CASE(allocKind, traceKind, type, sizedType) \ case AllocKind::allocKind: \ UpdateArenaPointersTyped<type>(trc, arena, JS::TraceKind::traceKind); \ return;FOR_EACH_ALLOCKIND(EXPAND_CASE)#undef EXPAND_CASEdefault:MOZ_CRASH("Invalid alloc kind for UpdateArenaPointers");}}namespacejs{namespacegc{structArenaListSegment{Arena*begin;Arena*end;};structArenasToUpdate{ArenasToUpdate(Zone*zone,AllocKindskinds);booldone(){returnkind==AllocKind::LIMIT;}ArenaListSegmentgetArenasToUpdate(AutoLockHelperThreadState&lock,unsignedmaxLength);private:AllocKindskinds;// Selects which thing kinds to updateZone*zone;// Zone to processAllocKindkind;// Current alloc kind to processArena*arena;// Next arena to processAllocKindnextAllocKind(AllocKindi){returnAllocKind(uint8_t(i)+1);}boolshouldProcessKind(AllocKindkind);Arena*next(AutoLockHelperThreadState&lock);};ArenasToUpdate::ArenasToUpdate(Zone*zone,AllocKindskinds):kinds(kinds),zone(zone),kind(AllocKind::FIRST),arena(nullptr){MOZ_ASSERT(zone->isGCCompacting());}Arena*ArenasToUpdate::next(AutoLockHelperThreadState&lock){// Find the next arena to update.//// This iterates through the GC thing kinds filtered by shouldProcessKind(),// and then through thea arenas of that kind. All state is held in the// object and we just return when we find an arena.for(;kind<AllocKind::LIMIT;kind=nextAllocKind(kind)){if(kinds.contains(kind)){if(!arena)arena=zone->arenas.getFirstArena(kind);elsearena=arena->next;if(arena)returnarena;}}MOZ_ASSERT(!arena);MOZ_ASSERT(done());returnnullptr;}ArenaListSegmentArenasToUpdate::getArenasToUpdate(AutoLockHelperThreadState&lock,unsignedmaxLength){Arena*begin=next(lock);if(!begin)return{nullptr,nullptr};Arena*last=begin;unsignedcount=1;while(last->next&&count<maxLength){last=last->next;count++;}arena=last;return{begin,last->next};}structUpdatePointersTask:publicGCParallelTask{// Maximum number of arenas to update in one block.#ifdef DEBUGstaticconstunsignedMaxArenasToProcess=16;#elsestaticconstunsignedMaxArenasToProcess=256;#endifUpdatePointersTask(JSRuntime*rt,ArenasToUpdate*source,AutoLockHelperThreadState&lock):GCParallelTask(rt),source_(source){arenas_.begin=nullptr;arenas_.end=nullptr;}~UpdatePointersTask()override{join();}private:ArenasToUpdate*source_;ArenaListSegmentarenas_;virtualvoidrun()override;boolgetArenasToUpdate();voidupdateArenas();};boolUpdatePointersTask::getArenasToUpdate(){AutoLockHelperThreadStatelock;arenas_=source_->getArenasToUpdate(lock,MaxArenasToProcess);returnarenas_.begin!=nullptr;}voidUpdatePointersTask::updateArenas(){MovingTracertrc(runtime());for(Arena*arena=arenas_.begin;arena!=arenas_.end;arena=arena->next)UpdateArenaPointers(&trc,arena);}/* virtual */voidUpdatePointersTask::run(){// These checks assert when run in parallel.AutoDisableProxyChecknoProxyCheck;while(getArenasToUpdate())updateArenas();}}// namespace gc}// namespace 
jsstaticconstsize_tMinCellUpdateBackgroundTasks=2;staticconstsize_tMaxCellUpdateBackgroundTasks=8;staticsize_tCellUpdateBackgroundTaskCount(){if(!CanUseExtraThreads())return0;size_ttargetTaskCount=HelperThreadState().cpuCount/2;returnMin(Max(targetTaskCount,MinCellUpdateBackgroundTasks),MaxCellUpdateBackgroundTasks);}staticboolCanUpdateKindInBackground(AllocKindkind){// We try to update as many GC things in parallel as we can, but there are// kinds for which this might not be safe:// - we assume JSObjects that are foreground finalized are not safe to// update in parallel// - updating a shape touches child shapes in fixupShapeTreeAfterMovingGC()if(!js::gc::IsBackgroundFinalized(kind)||IsShapeAllocKind(kind))returnfalse;returntrue;}staticAllocKindsForegroundUpdateKinds(AllocKindskinds){AllocKindsresult;for(AllocKindkind:kinds){if(!CanUpdateKindInBackground(kind))result+=kind;}returnresult;}voidGCRuntime::updateTypeDescrObjects(MovingTracer*trc,Zone*zone){zone->typeDescrObjects().sweep();for(autor=zone->typeDescrObjects().all();!r.empty();r.popFront())UpdateCellPointers(trc,r.front());}voidGCRuntime::updateCellPointers(MovingTracer*trc,Zone*zone,AllocKindskinds,size_tbgTaskCount){AllocKindsfgKinds=bgTaskCount==0?kinds:ForegroundUpdateKinds(kinds);AllocKindsbgKinds=kinds-fgKinds;ArenasToUpdatefgArenas(zone,fgKinds);ArenasToUpdatebgArenas(zone,bgKinds);Maybe<UpdatePointersTask>fgTask;Maybe<UpdatePointersTask>bgTasks[MaxCellUpdateBackgroundTasks];size_ttasksStarted=0;{AutoLockHelperThreadStatelock;fgTask.emplace(rt,&fgArenas,lock);for(size_ti=0;i<bgTaskCount&&!bgArenas.done();i++){bgTasks[i].emplace(rt,&bgArenas,lock);startTask(*bgTasks[i],gcstats::PhaseKind::COMPACT_UPDATE_CELLS,lock);tasksStarted=i;}}fgTask->runFromActiveCooperatingThread(rt);{AutoLockHelperThreadStatelock;for(size_ti=0;i<tasksStarted;i++)joinTask(*bgTasks[i],gcstats::PhaseKind::COMPACT_UPDATE_CELLS,lock);}}// After cells have been relocated any pointers to a cell's old locations must// be updated to point to the new location. This happens by iterating through// all cells in heap and tracing their children (non-recursively) to update// them.//// This is complicated by the fact that updating a GC thing sometimes depends on// making use of other GC things. 
After a moving GC these things may not be in// a valid state since they may contain pointers which have not been updated// yet.//// The main dependencies are://// - Updating a JSObject makes use of its shape// - Updating a typed object makes use of its type descriptor object//// This means we require at least three phases for update://// 1) shapes// 2) typed object type descriptor objects// 3) all other objects//// Since we want to minimize the number of phases, we put everything else into// the first phase and label it the 'misc' phase.staticconstAllocKindsUpdatePhaseMisc{AllocKind::SCRIPT,AllocKind::LAZY_SCRIPT,AllocKind::BASE_SHAPE,AllocKind::SHAPE,AllocKind::ACCESSOR_SHAPE,AllocKind::OBJECT_GROUP,AllocKind::STRING,AllocKind::JITCODE,AllocKind::SCOPE};staticconstAllocKindsUpdatePhaseObjects{AllocKind::FUNCTION,AllocKind::FUNCTION_EXTENDED,AllocKind::OBJECT0,AllocKind::OBJECT0_BACKGROUND,AllocKind::OBJECT2,AllocKind::OBJECT2_BACKGROUND,AllocKind::OBJECT4,AllocKind::OBJECT4_BACKGROUND,AllocKind::OBJECT8,AllocKind::OBJECT8_BACKGROUND,AllocKind::OBJECT12,AllocKind::OBJECT12_BACKGROUND,AllocKind::OBJECT16,AllocKind::OBJECT16_BACKGROUND};voidGCRuntime::updateAllCellPointers(MovingTracer*trc,Zone*zone){size_tbgTaskCount=CellUpdateBackgroundTaskCount();updateCellPointers(trc,zone,UpdatePhaseMisc,bgTaskCount);// Update TypeDescrs before all other objects as typed objects access these// objects when we trace them.updateTypeDescrObjects(trc,zone);updateCellPointers(trc,zone,UpdatePhaseObjects,bgTaskCount);}/* * Update pointers to relocated cells in a single zone by doing a traversal of * that zone's arenas and calling per-zone sweep hooks. * * The latter is necessary to update weak references which are not marked as * part of the traversal. */voidGCRuntime::updateZonePointersToRelocatedCells(Zone*zone,AutoLockForExclusiveAccess&lock){MOZ_ASSERT(!rt->isBeingDestroyed());MOZ_ASSERT(zone->isGCCompacting());gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::COMPACT_UPDATE);MovingTracertrc(rt);zone->fixupAfterMovingGC();// Fixup compartment global pointers as these get accessed during marking.for(CompartmentsInZoneItercomp(zone);!comp.done();comp.next())comp->fixupAfterMovingGC();zone->externalStringCache().purge();// Iterate through all cells that can contain relocatable pointers to update// them. Since updating each cell is independent we try to parallelize this// as much as possible.updateAllCellPointers(&trc,zone);// Mark roots to update them.{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::MARK_ROOTS);WeakMapBase::traceZone(zone,&trc);for(CompartmentsInZoneIterc(zone);!c.done();c.next()){if(c->watchpointMap)c->watchpointMap->trace(&trc);}}// Sweep everything to fix up weak pointers.rt->gc.sweepZoneAfterCompacting(zone);// Call callbacks to get the rest of the system to fixup other untraced pointers.for(CompartmentsInZoneItercomp(zone);!comp.done();comp.next())callWeakPointerCompartmentCallbacks(comp);}/* * Update runtime-wide pointers to relocated cells. 
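 *
 * As an illustrative recap (this restates MovingTracer::updateEdge above, it
 * is not a separate mechanism): every edge to a relocated cell is fixed up
 * through the forwarding pointer that relocation left at the cell's old
 * location:
 *
 *     if (thing->runtimeFromAnyThread() == runtime() && IsForwarded(thing))
 *         *thingp = Forwarded(thing);
 *
 * The update functions here only arrange for the tracer to visit every such
 * edge, whether it lives in a zone's arenas or in runtime-wide data
 * structures.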
*/voidGCRuntime::updateRuntimePointersToRelocatedCells(AutoLockForExclusiveAccess&lock){MOZ_ASSERT(!rt->isBeingDestroyed());gcstats::AutoPhaseap1(stats(),gcstats::PhaseKind::COMPACT_UPDATE);MovingTracertrc(rt);JSCompartment::fixupCrossCompartmentWrappersAfterMovingGC(&trc);rt->geckoProfiler().fixupStringsMapAfterMovingGC();traceRuntimeForMajorGC(&trc,lock);// Mark roots to update them.{gcstats::AutoPhaseap2(stats(),gcstats::PhaseKind::MARK_ROOTS);Debugger::traceAllForMovingGC(&trc);Debugger::traceIncomingCrossCompartmentEdges(&trc);// Mark all gray roots, making sure we call the trace callback to get the// current set.if(JSTraceDataOpop=grayRootTracer.op)(*op)(&trc,grayRootTracer.data);}// Sweep everything to fix up weak pointers.WatchpointMap::sweepAll(rt);Debugger::sweepAll(rt->defaultFreeOp());jit::JitRuntime::SweepJitcodeGlobalTable(rt);for(JS::detail::WeakCacheBase*cache:rt->weakCaches())cache->sweep();// Type inference may put more blocks here to free.blocksToFreeAfterSweeping.ref().freeAll();// Call callbacks to get the rest of the system to fixup other untraced pointers.callWeakPointerZonesCallbacks();}voidGCRuntime::protectAndHoldArenas(Arena*arenaList){for(Arena*arena=arenaList;arena;){MOZ_ASSERT(arena->allocated());Arena*next=arena->next;if(!next){// Prepend to hold list before we protect the memory.arena->next=relocatedArenasToRelease;relocatedArenasToRelease=arenaList;}ProtectPages(arena,ArenaSize);arena=next;}}voidGCRuntime::unprotectHeldRelocatedArenas(){for(Arena*arena=relocatedArenasToRelease;arena;arena=arena->next){UnprotectPages(arena,ArenaSize);MOZ_ASSERT(arena->allocated());}}voidGCRuntime::releaseRelocatedArenas(Arena*arenaList){AutoLockGClock(rt);releaseRelocatedArenasWithoutUnlocking(arenaList,lock);}voidGCRuntime::releaseRelocatedArenasWithoutUnlocking(Arena*arenaList,constAutoLockGC&lock){// Release the relocated arenas, now containing only forwarding pointersunsignedcount=0;while(arenaList){Arena*arena=arenaList;arenaList=arenaList->next;// Clear the mark bitsarena->unmarkAll();// Mark arena as emptyarena->setAsFullyUnused();#if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)JS_POISON(reinterpret_cast<void*>(arena->thingsStart()),JS_MOVED_TENURED_PATTERN,arena->getThingsSpan());#endifreleaseArena(arena,lock);++count;}}// In debug mode we don't always release relocated arenas straight away.// Sometimes protect them instead and hold onto them until the next GC sweep// phase to catch any pointers to them that didn't get forwarded.voidGCRuntime::releaseHeldRelocatedArenas(){#ifdef DEBUGunprotectHeldRelocatedArenas();Arena*arenas=relocatedArenasToRelease;relocatedArenasToRelease=nullptr;releaseRelocatedArenas(arenas);#endif}voidGCRuntime::releaseHeldRelocatedArenasWithoutUnlocking(constAutoLockGC&lock){#ifdef 
DEBUGunprotectHeldRelocatedArenas();releaseRelocatedArenasWithoutUnlocking(relocatedArenasToRelease,lock);relocatedArenasToRelease=nullptr;#endif}ArenaLists::ArenaLists(JSRuntime*rt,ZoneGroup*group):runtime_(rt),freeLists_(group),arenaLists_(group),backgroundFinalizeState_(),arenaListsToSweep_(),incrementalSweptArenaKind(group,AllocKind::LIMIT),incrementalSweptArenas(group),gcShapeArenasToUpdate(group,nullptr),gcAccessorShapeArenasToUpdate(group,nullptr),gcScriptArenasToUpdate(group,nullptr),gcObjectGroupArenasToUpdate(group,nullptr),savedObjectArenas_(group),savedEmptyObjectArenas(group,nullptr){for(autoi:AllAllocKinds())freeLists(i)=&placeholder;for(autoi:AllAllocKinds())backgroundFinalizeState(i)=BFS_DONE;for(autoi:AllAllocKinds())arenaListsToSweep(i)=nullptr;}voidReleaseArenaList(JSRuntime*rt,Arena*arena,constAutoLockGC&lock){Arena*next;for(;arena;arena=next){next=arena->next;rt->gc.releaseArena(arena,lock);}}ArenaLists::~ArenaLists(){AutoLockGClock(runtime_);for(autoi:AllAllocKinds()){/* * We can only call this during the shutdown after the last GC when * the background finalization is disabled. */MOZ_ASSERT(backgroundFinalizeState(i)==BFS_DONE);ReleaseArenaList(runtime_,arenaLists(i).head(),lock);}ReleaseArenaList(runtime_,incrementalSweptArenas.ref().head(),lock);for(autoi:ObjectAllocKinds())ReleaseArenaList(runtime_,savedObjectArenas(i).head(),lock);ReleaseArenaList(runtime_,savedEmptyObjectArenas,lock);}voidArenaLists::queueForForegroundSweep(FreeOp*fop,constFinalizePhase&phase){gcstats::AutoPhaseap(fop->runtime()->gc.stats(),phase.statsPhase);for(autokind:phase.kinds)queueForForegroundSweep(fop,kind);}voidArenaLists::queueForForegroundSweep(FreeOp*fop,AllocKindthingKind){MOZ_ASSERT(!IsBackgroundFinalized(thingKind));MOZ_ASSERT(backgroundFinalizeState(thingKind)==BFS_DONE);MOZ_ASSERT(!arenaListsToSweep(thingKind));arenaListsToSweep(thingKind)=arenaLists(thingKind).head();arenaLists(thingKind).clear();}voidArenaLists::queueForBackgroundSweep(FreeOp*fop,constFinalizePhase&phase){gcstats::AutoPhaseap(fop->runtime()->gc.stats(),phase.statsPhase);for(autokind:phase.kinds)queueForBackgroundSweep(fop,kind);}inlinevoidArenaLists::queueForBackgroundSweep(FreeOp*fop,AllocKindthingKind){MOZ_ASSERT(IsBackgroundFinalized(thingKind));ArenaList*al=&arenaLists(thingKind);if(al->isEmpty()){MOZ_ASSERT(backgroundFinalizeState(thingKind)==BFS_DONE);return;}MOZ_ASSERT(backgroundFinalizeState(thingKind)==BFS_DONE);arenaListsToSweep(thingKind)=al->head();al->clear();backgroundFinalizeState(thingKind)=BFS_RUN;}/*static*/voidArenaLists::backgroundFinalize(FreeOp*fop,Arena*listHead,Arena**empty){MOZ_ASSERT(listHead);MOZ_ASSERT(empty);AllocKindthingKind=listHead->getAllocKind();Zone*zone=listHead->zone;size_tthingsPerArena=Arena::thingsPerArena(thingKind);SortedArenaListfinalizedSorted(thingsPerArena);autounlimited=SliceBudget::unlimited();FinalizeArenas(fop,&listHead,finalizedSorted,thingKind,unlimited,KEEP_ARENAS);MOZ_ASSERT(!listHead);finalizedSorted.extractEmpty(empty);// When arenas are queued for background finalization, all arenas are moved// to arenaListsToSweep[], leaving the arenaLists[] empty. 
However, new// arenas may be allocated before background finalization finishes; now that// finalization is complete, we want to merge these lists back together.ArenaLists*lists=&zone->arenas;ArenaList*al=&lists->arenaLists(thingKind);// Flatten |finalizedSorted| into a regular ArenaList.ArenaListfinalized=finalizedSorted.toArenaList();// We must take the GC lock to be able to safely modify the ArenaList;// however, this does not by itself make the changes visible to all threads,// as not all threads take the GC lock to read the ArenaLists.// That safety is provided by the ReleaseAcquire memory ordering of the// background finalize state, which we explicitly set as the final step.{AutoLockGClock(lists->runtime_);MOZ_ASSERT(lists->backgroundFinalizeState(thingKind)==BFS_RUN);// Join |al| and |finalized| into a single list.*al=finalized.insertListWithCursorAtEnd(*al);lists->arenaListsToSweep(thingKind)=nullptr;}lists->backgroundFinalizeState(thingKind)=BFS_DONE;}voidArenaLists::mergeForegroundSweptObjectArenas(){AutoLockGClock(runtime_);ReleaseArenaList(runtime_,savedEmptyObjectArenas,lock);savedEmptyObjectArenas=nullptr;mergeSweptArenas(AllocKind::OBJECT0);mergeSweptArenas(AllocKind::OBJECT2);mergeSweptArenas(AllocKind::OBJECT4);mergeSweptArenas(AllocKind::OBJECT8);mergeSweptArenas(AllocKind::OBJECT12);mergeSweptArenas(AllocKind::OBJECT16);}inlinevoidArenaLists::mergeSweptArenas(AllocKindthingKind){ArenaList*al=&arenaLists(thingKind);ArenaList*saved=&savedObjectArenas(thingKind);*al=saved->insertListWithCursorAtEnd(*al);saved->clear();}voidArenaLists::queueForegroundThingsForSweep(FreeOp*fop){gcShapeArenasToUpdate=arenaListsToSweep(AllocKind::SHAPE);gcAccessorShapeArenasToUpdate=arenaListsToSweep(AllocKind::ACCESSOR_SHAPE);gcObjectGroupArenasToUpdate=arenaListsToSweep(AllocKind::OBJECT_GROUP);gcScriptArenasToUpdate=arenaListsToSweep(AllocKind::SCRIPT);}SliceBudget::SliceBudget():timeBudget(UnlimitedTimeBudget),workBudget(UnlimitedWorkBudget){makeUnlimited();}SliceBudget::SliceBudget(TimeBudgettime):timeBudget(time),workBudget(UnlimitedWorkBudget){if(time.budget<0){makeUnlimited();}else{// Note: TimeBudget(0) is equivalent to WorkBudget(CounterReset).deadline=PRMJ_Now()+time.budget*PRMJ_USEC_PER_MSEC;counter=CounterReset;}}SliceBudget::SliceBudget(WorkBudgetwork):timeBudget(UnlimitedTimeBudget),workBudget(work){if(work.budget<0){makeUnlimited();}else{deadline=0;counter=work.budget;}}intSliceBudget::describe(char*buffer,size_tmaxlen)const{if(isUnlimited())returnsnprintf(buffer,maxlen,"unlimited");elseif(isWorkBudget())returnsnprintf(buffer,maxlen,"work(%"PRId64")",workBudget.budget);elsereturnsnprintf(buffer,maxlen,"%"PRId64"ms",timeBudget.budget);}boolSliceBudget::checkOverBudget(){boolover=PRMJ_Now()>=deadline;if(!over)counter=CounterReset;returnover;}voidGCRuntime::requestMajorGC(JS::gcreason::Reasonreason){MOZ_ASSERT(!CurrentThreadIsPerformingGC());if(majorGCRequested())return;majorGCTriggerReason=reason;// There's no need to use RequestInterruptUrgent here. 
It's slower because// it has to interrupt (looping) Ion code, but loops in Ion code that// affect GC will have an explicit interrupt check.TlsContext.get()->requestInterrupt(JSContext::RequestInterruptCanWait);}voidNursery::requestMinorGC(JS::gcreason::Reasonreason)const{MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));MOZ_ASSERT(!CurrentThreadIsPerformingGC());if(minorGCRequested())return;minorGCTriggerReason_=reason;// See comment in requestMajorGC.TlsContext.get()->requestInterrupt(JSContext::RequestInterruptCanWait);}boolGCRuntime::triggerGC(JS::gcreason::Reasonreason){/* * Don't trigger GCs if this is being called off the active thread from * onTooMuchMalloc(). */if(!CurrentThreadCanAccessRuntime(rt))returnfalse;/* GC is already running. */if(JS::CurrentThreadIsHeapCollecting())returnfalse;JS::PrepareForFullGC(rt->activeContextFromOwnThread());requestMajorGC(reason);returntrue;}voidGCRuntime::maybeAllocTriggerZoneGC(Zone*zone,constAutoLockGC&lock){size_tusedBytes=zone->usage.gcBytes();size_tthresholdBytes=zone->threshold.gcTriggerBytes();if(!CurrentThreadCanAccessRuntime(rt)){/* Zones in use by a helper thread can't be collected. */MOZ_ASSERT(zone->usedByHelperThread()||zone->isAtomsZone());return;}if(usedBytes>=thresholdBytes){/* * The threshold has been surpassed, immediately trigger a GC, * which will be done non-incrementally. */triggerZoneGC(zone,JS::gcreason::ALLOC_TRIGGER);}else{boolwouldInterruptCollection;size_tigcThresholdBytes;doublezoneAllocThresholdFactor;wouldInterruptCollection=isIncrementalGCInProgress()&&!zone->isCollecting();zoneAllocThresholdFactor=wouldInterruptCollection?tunables.zoneAllocThresholdFactorAvoidInterrupt():tunables.zoneAllocThresholdFactor();igcThresholdBytes=thresholdBytes*zoneAllocThresholdFactor;if(usedBytes>=igcThresholdBytes){// Reduce the delay to the start of the next incremental slice.if(zone->gcDelayBytes<ArenaSize)zone->gcDelayBytes=0;elsezone->gcDelayBytes-=ArenaSize;if(!zone->gcDelayBytes){// Start or continue an in progress incremental GC. We do this// to try to avoid performing non-incremental GCs on zones// which allocate a lot of data, even when incremental slices// can't be triggered via scheduling in the event loop.triggerZoneGC(zone,JS::gcreason::ALLOC_TRIGGER);// Delay the next slice until a certain amount of allocation// has been performed.zone->gcDelayBytes=tunables.zoneAllocDelayBytes();}}}}boolGCRuntime::triggerZoneGC(Zone*zone,JS::gcreason::Reasonreason){MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));/* GC is already running. */if(JS::CurrentThreadIsHeapCollecting())returnfalse;#ifdef JS_GC_ZEALif(hasZealMode(ZealMode::Alloc)){MOZ_RELEASE_ASSERT(triggerGC(reason));returntrue;}#endifif(zone->isAtomsZone()){/* We can't do a zone GC of the atoms compartment. */if(TlsContext.get()->keepAtoms||rt->hasHelperThreadZones()){/* Skip GC and retrigger later, since atoms zone won't be collected * if keepAtoms is true. 
*/fullGCForAtomsRequested_=true;returnfalse;}MOZ_RELEASE_ASSERT(triggerGC(reason));returntrue;}PrepareZoneForGC(zone);requestMajorGC(reason);returntrue;}voidGCRuntime::maybeGC(Zone*zone){MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));#ifdef JS_GC_ZEALif(hasZealMode(ZealMode::Alloc)||hasZealMode(ZealMode::Poke)){JS::PrepareForFullGC(rt->activeContextFromOwnThread());gc(GC_NORMAL,JS::gcreason::DEBUG_GC);return;}#endifif(gcIfRequested())return;if(zone->usage.gcBytes()>1024*1024&&zone->usage.gcBytes()>=zone->threshold.allocTrigger(schedulingState.inHighFrequencyGCMode())&&!isIncrementalGCInProgress()&&!isBackgroundSweeping()){PrepareZoneForGC(zone);startGC(GC_NORMAL,JS::gcreason::EAGER_ALLOC_TRIGGER);}}// Do all possible decommit immediately from the current thread without// releasing the GC lock or allocating any memory.voidGCRuntime::decommitAllWithoutUnlocking(constAutoLockGC&lock){MOZ_ASSERT(emptyChunks(lock).count()==0);for(ChunkPool::Iterchunk(availableChunks(lock));!chunk.done();chunk.next())chunk->decommitAllArenasWithoutUnlocking(lock);MOZ_ASSERT(availableChunks(lock).verify());}voidGCRuntime::startDecommit(){MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));MOZ_ASSERT(!decommitTask.isRunning());// If we are allocating heavily enough to trigger "high freqency" GC, then// skip decommit so that we do not compete with the mutator.if(schedulingState.inHighFrequencyGCMode())return;BackgroundDecommitTask::ChunkVectortoDecommit;{AutoLockGClock(rt);// Verify that all entries in the empty chunks pool are already decommitted.for(ChunkPool::Iterchunk(emptyChunks(lock));!chunk.done();chunk.next())MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);// Since we release the GC lock while doing the decommit syscall below,// it is dangerous to iterate the available list directly, as the active// thread could modify it concurrently. 
Instead, we build and pass an// explicit Vector containing the Chunks we want to visit.MOZ_ASSERT(availableChunks(lock).verify());for(ChunkPool::Iteriter(availableChunks(lock));!iter.done();iter.next()){if(!toDecommit.append(iter.get())){// The OOM handler does a full, immediate decommit.returnonOutOfMallocMemory(lock);}}}decommitTask.setChunksToScan(toDecommit);if(sweepOnBackgroundThread&&decommitTask.start())return;decommitTask.runFromActiveCooperatingThread(rt);}voidjs::gc::BackgroundDecommitTask::setChunksToScan(ChunkVector&chunks){MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));MOZ_ASSERT(!isRunning());MOZ_ASSERT(toDecommit.ref().empty());Swap(toDecommit.ref(),chunks);}/* virtual */voidjs::gc::BackgroundDecommitTask::run(){AutoLockGClock(runtime());for(Chunk*chunk:toDecommit.ref()){// The arena list is not doubly-linked, so we have to work in the free// list order and not in the natural order.while(chunk->info.numArenasFreeCommitted){boolok=chunk->decommitOneFreeArena(runtime(),lock);// If we are low enough on memory that we can't update the page// tables, or if we need to return for any other reason, break out// of the loop.if(cancel_||!ok)break;}}toDecommit.ref().clearAndFree();ChunkPooltoFree=runtime()->gc.expireEmptyChunkPool(lock);if(toFree.count()){AutoUnlockGCunlock(lock);FreeChunkPool(runtime(),toFree);}}voidGCRuntime::sweepBackgroundThings(ZoneList&zones,LifoAlloc&freeBlocks){freeBlocks.freeAll();if(zones.isEmpty())return;// We must finalize thing kinds in the order specified by BackgroundFinalizePhases.Arena*emptyArenas=nullptr;FreeOpfop(nullptr);for(unsignedphase=0;phase<ArrayLength(BackgroundFinalizePhases);++phase){for(Zone*zone=zones.front();zone;zone=zone->nextZone()){for(autokind:BackgroundFinalizePhases[phase].kinds){Arena*arenas=zone->arenas.arenaListsToSweep(kind);MOZ_RELEASE_ASSERT(uintptr_t(arenas)!=uintptr_t(-1));if(arenas)ArenaLists::backgroundFinalize(&fop,arenas,&emptyArenas);}}}AutoLockGClock(rt);// Release swept arenas, dropping and reaquiring the lock every so often to// avoid blocking the active thread from allocating chunks.staticconstsize_tLockReleasePeriod=32;size_treleaseCount=0;Arena*next;for(Arena*arena=emptyArenas;arena;arena=next){next=arena->next;rt->gc.releaseArena(arena,lock);releaseCount++;if(releaseCount%LockReleasePeriod==0){lock.unlock();lock.lock();}}while(!zones.isEmpty())zones.removeFront();}voidGCRuntime::assertBackgroundSweepingFinished(){#ifdef DEBUGMOZ_ASSERT(backgroundSweepZones.ref().isEmpty());for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){for(autoi:AllAllocKinds()){MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));MOZ_ASSERT(zone->arenas.doneBackgroundFinalize(i));}}MOZ_ASSERT(blocksToFreeAfterSweeping.ref().computedSizeOfExcludingThis()==0);#endif}voidGCHelperState::finish(){// Wait for any lingering background sweeping to finish.waitBackgroundSweepEnd();}GCHelperState::StateGCHelperState::state(constAutoLockGC&){returnstate_;}voidGCHelperState::setState(Statestate,constAutoLockGC&){state_=state;}voidGCHelperState::startBackgroundThread(StatenewState,constAutoLockGC&lock,constAutoLockHelperThreadState&helperLock){MOZ_ASSERT(!hasThread&&state(lock)==IDLE&&newState!=IDLE);setState(newState,lock);{AutoEnterOOMUnsafeRegionnoOOM;if(!HelperThreadState().gcHelperWorklist(helperLock).append(this))noOOM.crash("Could not add to pending GC helpers 
list");}HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER,helperLock);}voidGCHelperState::waitForBackgroundThread(js::AutoLockGC&lock){while(isBackgroundSweeping())done.wait(lock.guard());}voidGCHelperState::work(){MOZ_ASSERT(CanUseExtraThreads());AutoLockGClock(rt);MOZ_ASSERT(!hasThread);hasThread=true;#ifdef DEBUGMOZ_ASSERT(!TlsContext.get()->gcHelperStateThread);TlsContext.get()->gcHelperStateThread=true;#endifTraceLoggerThread*logger=TraceLoggerForCurrentThread();switch(state(lock)){caseIDLE:MOZ_CRASH("GC helper triggered on idle state");break;caseSWEEPING:{AutoTraceLoglogSweeping(logger,TraceLogger_GCSweeping);doSweep(lock);MOZ_ASSERT(state(lock)==SWEEPING);break;}}setState(IDLE,lock);hasThread=false;#ifdef DEBUGTlsContext.get()->gcHelperStateThread=false;#endifdone.notify_all();}voidGCRuntime::queueZonesForBackgroundSweep(ZoneList&zones){AutoLockHelperThreadStatehelperLock;AutoLockGClock(rt);backgroundSweepZones.ref().transferFrom(zones);helperState.maybeStartBackgroundSweep(lock,helperLock);}voidGCRuntime::freeUnusedLifoBlocksAfterSweeping(LifoAlloc*lifo){MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());AutoLockGClock(rt);blocksToFreeAfterSweeping.ref().transferUnusedFrom(lifo);}voidGCRuntime::freeAllLifoBlocksAfterSweeping(LifoAlloc*lifo){MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());AutoLockGClock(rt);blocksToFreeAfterSweeping.ref().transferFrom(lifo);}voidGCRuntime::freeAllLifoBlocksAfterMinorGC(LifoAlloc*lifo){blocksToFreeAfterMinorGC.ref().transferFrom(lifo);}voidGCHelperState::maybeStartBackgroundSweep(constAutoLockGC&lock,constAutoLockHelperThreadState&helperLock){MOZ_ASSERT(CanUseExtraThreads());if(state(lock)==IDLE)startBackgroundThread(SWEEPING,lock,helperLock);}voidGCHelperState::waitBackgroundSweepEnd(){AutoLockGClock(rt);while(state(lock)==SWEEPING)waitForBackgroundThread(lock);if(!rt->gc.isIncrementalGCInProgress())rt->gc.assertBackgroundSweepingFinished();}voidGCHelperState::doSweep(AutoLockGC&lock){// The active thread may call queueZonesForBackgroundSweep() while this is// running so we must check there is no more work to do before exiting.do{while(!rt->gc.backgroundSweepZones.ref().isEmpty()){AutoSetThreadIsSweepingthreadIsSweeping;ZoneListzones;zones.transferFrom(rt->gc.backgroundSweepZones.ref());LifoAllocfreeLifoAlloc(JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE);freeLifoAlloc.transferFrom(&rt->gc.blocksToFreeAfterSweeping.ref());AutoUnlockGCunlock(lock);rt->gc.sweepBackgroundThings(zones,freeLifoAlloc);}}while(!rt->gc.backgroundSweepZones.ref().isEmpty());}#ifdef DEBUGboolGCHelperState::onBackgroundThread(){returnTlsContext.get()->gcHelperStateThread;}#endif // DEBUGboolGCRuntime::shouldReleaseObservedTypes(){boolreleaseTypes=false;#ifdef JS_GC_ZEALif(zealModeBits!=0)releaseTypes=true;#endif/* We may miss the exact target GC due to resets. 
*/if(majorGCNumber>=jitReleaseNumber)releaseTypes=true;if(releaseTypes)jitReleaseNumber=majorGCNumber+JIT_SCRIPT_RELEASE_TYPES_PERIOD;returnreleaseTypes;}structIsAboutToBeFinalizedFunctor{template<typenameT>booloperator()(Cell**t){mozilla::DebugOnly<constCell*>prior=*t;boolresult=IsAboutToBeFinalizedUnbarriered(reinterpret_cast<T**>(t));// Sweep should not have to deal with moved pointers, since moving GC// handles updating the UID table manually.MOZ_ASSERT(*t==prior);returnresult;}};/* static */boolUniqueIdGCPolicy::needsSweep(Cell**cell,uint64_t*){returnDispatchTraceKindTyped(IsAboutToBeFinalizedFunctor(),(*cell)->getTraceKind(),cell);}voidJS::Zone::sweepUniqueIds(js::FreeOp*fop){uniqueIds().sweep();}/* * It's simpler if we preserve the invariant that every zone has at least one * compartment. If we know we're deleting the entire zone, then * SweepCompartments is allowed to delete all compartments. In this case, * |keepAtleastOne| is false. If some objects remain in the zone so that it * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits * SweepCompartments from deleting every compartment. Instead, it preserves an * arbitrary compartment in the zone. */voidZone::sweepCompartments(FreeOp*fop,boolkeepAtleastOne,booldestroyingRuntime){JSRuntime*rt=runtimeFromActiveCooperatingThread();JSDestroyCompartmentCallbackcallback=rt->destroyCompartmentCallback;JSCompartment**read=compartments().begin();JSCompartment**end=compartments().end();JSCompartment**write=read;boolfoundOne=false;while(read<end){JSCompartment*comp=*read++;MOZ_ASSERT(!rt->isAtomsCompartment(comp));/* * Don't delete the last compartment if all the ones before it were * deleted and keepAtleastOne is true. */booldontDelete=read==end&&!foundOne&&keepAtleastOne;if((!comp->marked&&!dontDelete)||destroyingRuntime){if(callback)callback(fop,comp);if(comp->principals())JS_DropPrincipals(TlsContext.get(),comp->principals());js_delete(comp);rt->gc.stats().sweptCompartment();}else{*write++=comp;foundOne=true;}}compartments().shrinkTo(write-compartments().begin());MOZ_ASSERT_IF(keepAtleastOne,!compartments().empty());}voidGCRuntime::sweepZones(FreeOp*fop,ZoneGroup*group,booldestroyingRuntime){Zone**read=group->zones().begin();Zone**end=group->zones().end();Zone**write=read;while(read<end){Zone*zone=*read++;if(zone->wasGCStarted()){MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());constboolzoneIsDead=zone->arenas.arenaListsAreEmpty()&&!zone->hasMarkedCompartments();if(zoneIsDead||destroyingRuntime){// We have just finished sweeping, so we should have freed any// empty arenas back to their Chunk for future allocation.zone->arenas.checkEmptyFreeLists();// We are about to delete the Zone; this will leave the Zone*// in the arena header dangling if there are any arenas// remaining at this point.#ifdef 
DEBUGif(!zone->arenas.checkEmptyArenaLists())arenasEmptyAtShutdown=false;#endifzone->sweepCompartments(fop,false,destroyingRuntime);MOZ_ASSERT(zone->compartments().empty());MOZ_ASSERT_IF(arenasEmptyAtShutdown,zone->typeDescrObjects().empty());fop->delete_(zone);stats().sweptZone();continue;}zone->sweepCompartments(fop,true,destroyingRuntime);}*write++=zone;}group->zones().shrinkTo(write-group->zones().begin());}voidGCRuntime::sweepZoneGroups(FreeOp*fop,booldestroyingRuntime){MOZ_ASSERT_IF(destroyingRuntime,numActiveZoneIters==0);MOZ_ASSERT_IF(destroyingRuntime,arenasEmptyAtShutdown);if(rt->gc.numActiveZoneIters)return;assertBackgroundSweepingFinished();ZoneGroup**read=groups.ref().begin();ZoneGroup**end=groups.ref().end();ZoneGroup**write=read;while(read<end){ZoneGroup*group=*read++;sweepZones(fop,group,destroyingRuntime);if(group->zones().empty()){MOZ_ASSERT(numActiveZoneIters==0);fop->delete_(group);}else{*write++=group;}}groups.ref().shrinkTo(write-groups.ref().begin());}#ifdef DEBUGstaticconstchar*AllocKindToAscii(AllocKindkind){switch(kind){#define MAKE_CASE(allocKind, traceKind, type, sizedType) \ case AllocKind:: allocKind: return #allocKind;FOR_EACH_ALLOCKIND(MAKE_CASE)#undef MAKE_CASEdefault:MOZ_CRASH("Unknown AllocKind in AllocKindToAscii");}}#endif // DEBUGboolArenaLists::checkEmptyArenaList(AllocKindkind){size_tnum_live=0;#ifdef DEBUGif(!arenaLists(kind).isEmpty()){size_tmax_cells=20;char*env=getenv("JS_GC_MAX_LIVE_CELLS");if(env&&*env)max_cells=atol(env);for(Arena*current=arenaLists(kind).head();current;current=current->next){for(ArenaCellIterUnderGCi(current);!i.done();i.next()){TenuredCell*t=i.getCell();MOZ_ASSERT(t->isMarked(),"unmarked cells should have been finalized");if(++num_live<=max_cells){fprintf(stderr,"ERROR: GC found live Cell %p of kind %s at shutdown\n",t,AllocKindToAscii(kind));}}}fprintf(stderr,"ERROR: GC found %"PRIuSIZE" live Cells at shutdown\n",num_live);}#endif // 
DEBUGreturnnum_live==0;}classMOZ_RAIIjs::gc::AutoRunParallelTask:publicGCParallelTask{usingFunc=void(*)(JSRuntime*);Funcfunc_;gcstats::PhaseKindphase_;AutoLockHelperThreadState&lock_;public:AutoRunParallelTask(JSRuntime*rt,Funcfunc,gcstats::PhaseKindphase,AutoLockHelperThreadState&lock):GCParallelTask(rt),func_(func),phase_(phase),lock_(lock){runtime()->gc.startTask(*this,phase_,lock_);}~AutoRunParallelTask(){runtime()->gc.joinTask(*this,phase_,lock_);}voidrun()override{func_(runtime());}};voidGCRuntime::purgeRuntime(AutoLockForExclusiveAccess&lock){gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::PURGE);for(GCCompartmentsItercomp(rt);!comp.done();comp.next())comp->purge();for(GCZonesIterzone(rt);!zone.done();zone.next()){zone->atomCache().clearAndShrink();zone->externalStringCache().purge();}for(constCooperatingContext&target:rt->cooperatingContexts()){freeUnusedLifoBlocksAfterSweeping(&target.context()->tempLifoAlloc());target.context()->interpreterStack().purge(rt);target.context()->frontendCollectionPool().purge();}rt->caches().gsnCache.purge();rt->caches().envCoordinateNameCache.purge();rt->caches().newObjectCache.purge();rt->caches().nativeIterCache.purge();rt->caches().uncompressedSourceCache.purge();if(rt->caches().evalCache.initialized())rt->caches().evalCache.clear();if(autocache=rt->maybeThisRuntimeSharedImmutableStrings())cache->purge();rt->promiseTasksToDestroy.lock()->clear();MOZ_ASSERT(unmarkGrayStack.empty());unmarkGrayStack.clearAndFree();}boolGCRuntime::shouldPreserveJITCode(JSCompartment*comp,int64_tcurrentTime,JS::gcreason::Reasonreason,boolcanAllocateMoreCode){if(cleanUpEverything)returnfalse;if(!canAllocateMoreCode)returnfalse;if(alwaysPreserveCode)returntrue;if(comp->preserveJitCode())returntrue;if(comp->lastAnimationTime+PRMJ_USEC_PER_SEC>=currentTime)returntrue;if(reason==JS::gcreason::DEBUG_GC)returntrue;returnfalse;}#ifdef DEBUGclassCompartmentCheckTracer:publicJS::CallbackTracer{voidonChild(constJS::GCCellPtr&thing)override;public:explicitCompartmentCheckTracer(JSRuntime*rt):JS::CallbackTracer(rt),src(nullptr),zone(nullptr),compartment(nullptr){}Cell*src;JS::TraceKindsrcKind;Zone*zone;JSCompartment*compartment;};namespace{structIsDestComparatorFunctor{JS::GCCellPtrdst_;explicitIsDestComparatorFunctor(JS::GCCellPtrdst):dst_(dst){}template<typenameT>booloperator()(T*t){return(*t)==dst_.asCell();}};}// namespace (anonymous)staticboolInCrossCompartmentMap(JSObject*src,JS::GCCellPtrdst){JSCompartment*srccomp=src->compartment();if(dst.is<JSObject>()){Valuekey=ObjectValue(dst.as<JSObject>());if(WrapperMap::Ptrp=srccomp->lookupWrapper(key)){if(*p->value().unsafeGet()==ObjectValue(*src))returntrue;}}/* * If the cross-compartment edge is caused by the debugger, then we don't * know the right hashtable key, so we have to iterate. 
     */
    for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
        if (e.front().mutableKey().applyToWrapped(IsDestComparatorFunctor(dst)) &&
            ToMarkable(e.front().value().unbarrieredGet()) == src)
        {
            return true;
        }
    }

    return false;
}

struct MaybeCompartmentFunctor {
    template <typename T> JSCompartment* operator()(T* t) { return t->maybeCompartment(); }
};

void
CompartmentCheckTracer::onChild(const JS::GCCellPtr& thing)
{
    JSCompartment* comp = DispatchTyped(MaybeCompartmentFunctor(), thing);
    if (comp && compartment) {
        MOZ_ASSERT(comp == compartment || runtime()->isAtomsCompartment(comp) ||
                   (srcKind == JS::TraceKind::Object &&
                    InCrossCompartmentMap(static_cast<JSObject*>(src), thing)));
    } else {
        TenuredCell* tenured = TenuredCell::fromPointer(thing.asCell());
        Zone* thingZone = tenured->zoneFromAnyThread();
        MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
    }
}

void
GCRuntime::checkForCompartmentMismatches()
{
    if (TlsContext.get()->disableStrictProxyCheckingCount)
        return;

    CompartmentCheckTracer trc(rt);
    AutoAssertEmptyNursery empty(TlsContext.get());
    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        trc.zone = zone;
        for (auto thingKind : AllAllocKinds()) {
            for (auto i = zone->cellIter<TenuredCell>(thingKind, empty); !i.done(); i.next()) {
                trc.src = i.getCell();
                trc.srcKind = MapAllocToTraceKind(thingKind);
                trc.compartment = DispatchTraceKindTyped(MaybeCompartmentFunctor(),
                                                         trc.src, trc.srcKind);
                js::TraceChildren(&trc, trc.src, trc.srcKind);
            }
        }
    }
}
#endif

static void
RelazifyFunctions(Zone* zone, AllocKind kind)
{
    MOZ_ASSERT(kind == AllocKind::FUNCTION ||
               kind == AllocKind::FUNCTION_EXTENDED);

    AutoAssertEmptyNursery empty(TlsContext.get());

    JSRuntime* rt = zone->runtimeFromActiveCooperatingThread();
    for (auto i = zone->cellIter<JSObject>(kind, empty); !i.done(); i.next()) {
        JSFunction* fun = &i->as<JSFunction>();
        if (fun->hasScript())
            fun->maybeRelazify(rt);
    }
}

static bool
ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason)
{
    // Normally we collect all scheduled zones.
    if (reason != JS::gcreason::COMPARTMENT_REVIVED)
        return zone->isGCScheduled();

    // If we are repeating a GC because we noticed dead compartments haven't
    // been collected, then only collect zones containing those compartments.
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
        if (comp->scheduledForDestruction)
            return true;
    }

    return false;
}

bool
GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, bool* isFullOut,
                                     AutoLockForExclusiveAccess& lock)
{
#ifdef DEBUG
    /* Assert that zone state is as we expect. */
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        MOZ_ASSERT(!zone->isCollecting());
        MOZ_ASSERT(!zone->compartments().empty());
        for (auto i : AllAllocKinds())
            MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
    }
#endif

    *isFullOut = true;
    bool any = false;

    int64_t currentTime = PRMJ_Now();

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        /* Set up which zones will be collected. */
        if (ShouldCollectZone(zone, reason)) {
            if (!zone->isAtomsZone()) {
                any = true;
                zone->setGCState(Zone::Mark);
            }
        } else {
            *isFullOut = false;
        }

        zone->setPreservingCode(false);
    }

    // Discard JIT code more aggressively if the process is approaching its
    // executable code limit.
    bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();

    for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) {
        c->marked = false;
        c->scheduledForDestruction = false;
        c->maybeAlive = c->hasBeenEntered() || !c->zone()->isGCScheduled();
        if (shouldPreserveJITCode(c, currentTime, reason, canAllocateMoreCode))
            c->zone()->setPreservingCode(true);
    }

    if (!cleanUpEverything && canAllocateMoreCode) {
        jit::JitActivationIterator activation(TlsContext.get());
        if (!activation.done())
            activation->compartment()->zone()->setPreservingCode(true);
    }

    /*
     * If keepAtoms() is true then either an instance of AutoKeepAtoms is
     * currently on the stack or parsing is currently happening on another
     * thread. In either case we don't have information about which atoms are
     * roots, so we must skip collecting atoms.
     *
     * Note that this only affects the first slice of an incremental GC since
     * root marking is completed before we return to the mutator.
     *
     * Off-thread parsing is inhibited after the start of GC which prevents
     * races between creating atoms during parsing and sweeping atoms on the
     * active thread.
     *
     * Otherwise, we always schedule a GC in the atoms zone so that atoms which
     * the other collected zones are using are marked, and we can update the
     * set of atoms in use by the other collected zones at the end of the GC.
     */
    if (!TlsContext.get()->keepAtoms || rt->hasHelperThreadZones()) {
        Zone* atomsZone = rt->atomsCompartment(lock)->zone();
        if (atomsZone->isGCScheduled()) {
            MOZ_ASSERT(!atomsZone->isCollecting());
            atomsZone->setGCState(Zone::Mark);
            any = true;
        }
    }

    /* Check that at least one zone is scheduled for collection. */
    return any;
}

static void
DiscardJITCodeForIncrementalGC(JSRuntime* rt)
{
    js::CancelOffThreadIonCompile(rt, JS::Zone::Mark);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
        zone->discardJitCode(rt->defaultFreeOp());
    }
}

static void
RelazifyFunctionsForShrinkingGC(JSRuntime* rt)
{
    gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (zone->isSelfHostingZone())
            continue;
        RelazifyFunctions(zone, AllocKind::FUNCTION);
        RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
    }
}

static void
PurgeShapeTablesForShrinkingGC(JSRuntime* rt)
{
    gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::PURGE_SHAPE_TABLES);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (zone->keepShapeTables() || zone->isSelfHostingZone())
            continue;
        for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next())
            baseShape->maybePurgeTable();
    }
}

static void
UnmarkCollectedZones(JSRuntime* rt)
{
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Unmark everything in the zones being collected. */
        zone->arenas.unmarkAll();
    }

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Unmark all weak maps in the zones being collected. */
        WeakMapBase::unmarkZone(zone);
    }
}

static void
BufferGrayRoots(JSRuntime* rt)
{
    rt->gc.bufferGrayRoots();
}

bool
GCRuntime::beginMarkPhase(JS::gcreason::Reason reason, AutoLockForExclusiveAccess& lock)
{
#ifdef DEBUG
    if (fullCompartmentChecks)
        checkForCompartmentMismatches();
#endif

    if (!prepareZonesForCollection(reason, &isFull.ref(), lock))
        return false;

    /*
     * Ensure that after the start of a collection we don't allocate into any
     * existing arenas, as this can cause unreachable things to be marked.
*/if(isIncremental){for(GCZonesIterzone(rt);!zone.done();zone.next())zone->arenas.prepareForIncrementalGC();}MemProfiler::MarkTenuredStart(rt);marker.start();GCMarker*gcmarker=▮{gcstats::AutoPhaseap1(stats(),gcstats::PhaseKind::PREPARE);AutoLockHelperThreadStatehelperLock;/* * Clear all mark state for the zones we are collecting. This is linear * in the size of the heap we are collecting and so can be slow. Do this * in parallel with the rest of this block. */AutoRunParallelTaskunmarkCollectedZones(rt,UnmarkCollectedZones,gcstats::PhaseKind::UNMARK,helperLock);/* * Buffer gray roots for incremental collections. This is linear in the * number of roots which can be in the tens of thousands. Do this in * parallel with the rest of this block. */Maybe<AutoRunParallelTask>bufferGrayRoots;if(isIncremental)bufferGrayRoots.emplace(rt,BufferGrayRoots,gcstats::PhaseKind::BUFFER_GRAY_ROOTS,helperLock);AutoUnlockHelperThreadStateunlock(helperLock);/* * Discard JIT code for incremental collections (for non-incremental * collections the following sweep discards the jit code). */if(isIncremental)DiscardJITCodeForIncrementalGC(rt);/* * Relazify functions after discarding JIT code (we can't relazify * functions with JIT code) and before the actual mark phase, so that * the current GC can collect the JSScripts we're unlinking here. We do * this only when we're performing a shrinking GC, as too much * relazification can cause performance issues when we have to reparse * the same functions over and over. */if(invocationKind==GC_SHRINK){RelazifyFunctionsForShrinkingGC(rt);PurgeShapeTablesForShrinkingGC(rt);}/* * We must purge the runtime at the beginning of an incremental GC. The * danger if we purge later is that the snapshot invariant of * incremental GC will be broken, as follows. If some object is * reachable only through some cache (say the dtoaCache) then it will * not be part of the snapshot. If we purge after root marking, then * the mutator could obtain a pointer to the object and start using * it. This object might never be marked, so a GC hazard would exist. */purgeRuntime(lock);}/* * Mark phase. */gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::MARK);traceRuntimeForMajorGC(gcmarker,lock);if(isIncremental)markCompartments();/* * Process any queued source compressions during the start of a major * GC. */{AutoLockHelperThreadStatehelperLock;HelperThreadState().startHandlingCompressionTasks(helperLock);}returntrue;}voidGCRuntime::markCompartments(){gcstats::AutoPhaseap1(stats(),gcstats::PhaseKind::MARK_ROOTS);gcstats::AutoPhaseap2(stats(),gcstats::PhaseKind::MARK_COMPARTMENTS);/* * This code ensures that if a compartment is "dead", then it will be * collected in this GC. A compartment is considered dead if its maybeAlive * flag is false. The maybeAlive flag is set if: * * (1) the compartment has been entered (set in beginMarkPhase() above) * (2) the compartment is not being collected (set in beginMarkPhase() * above) * (3) an object in the compartment was marked during root marking, either * as a black root or a gray root (set in RootMarking.cpp), or * (4) the compartment has incoming cross-compartment edges from another * compartment that has maybeAlive set (set by this method). * * If the maybeAlive is false, then we set the scheduledForDestruction flag. * At the end of the GC, we look for compartments where * scheduledForDestruction is true. These are compartments that were somehow * "revived" during the incremental GC. 
     * If any are found, we do a special, non-incremental GC of those
     * compartments to try to collect them.
     *
     * Compartments can be revived for a variety of reasons. One reason is bug
     * 811587, where a reflector that was dead can be revived by DOM code that
     * still refers to the underlying DOM node.
     *
     * Read barriers and allocations can also cause revival. This might happen
     * during a function like JS_TransplantObject, which iterates over all
     * compartments, live or dead, and operates on their objects. See bug
     * 803376 for details on this problem. To avoid the problem, we try to
     * avoid allocation and read barriers during JS_TransplantObject and the
     * like.
     */

    /* Propagate the maybeAlive flag via cross-compartment edges. */

    Vector<JSCompartment*, 0, js::SystemAllocPolicy> workList;

    for (CompartmentsIter comp(rt, SkipAtoms); !comp.done(); comp.next()) {
        if (comp->maybeAlive) {
            if (!workList.append(comp))
                return;
        }
    }

    while (!workList.empty()) {
        JSCompartment* comp = workList.popCopy();
        for (JSCompartment::NonStringWrapperEnum e(comp); !e.empty(); e.popFront()) {
            JSCompartment* dest = e.front().mutableKey().compartment();
            if (dest && !dest->maybeAlive) {
                dest->maybeAlive = true;
                if (!workList.append(dest))
                    return;
            }
        }
    }

    /* Set scheduledForDestruction based on maybeAlive. */

    for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
        MOZ_ASSERT(!comp->scheduledForDestruction);
        if (!comp->maybeAlive && !rt->isAtomsCompartment(comp))
            comp->scheduledForDestruction = true;
    }
}

template <class ZoneIterT>
void
GCRuntime::markWeakReferences(gcstats::PhaseKind phase)
{
    MOZ_ASSERT(marker.isDrained());

    gcstats::AutoPhase ap1(stats(), phase);

    marker.enterWeakMarkingMode();

    // TODO bug 1167452: Make weak marking incremental
    auto unlimited = SliceBudget::unlimited();
    MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));

    for (;;) {
        bool markedAny = false;
        if (!marker.isWeakMarkingTracer()) {
            for (ZoneIterT zone(rt); !zone.done(); zone.next())
                markedAny |= WeakMapBase::markZoneIteratively(zone, &marker);
        }
        for (CompartmentsIterT<ZoneIterT> c(rt); !c.done(); c.next()) {
            if (c->watchpointMap)
                markedAny |= c->watchpointMap->markIteratively(&marker);
        }
        markedAny |= Debugger::markIteratively(&marker);
        markedAny |= jit::JitRuntime::MarkJitcodeGlobalTableIteratively(&marker);

        if (!markedAny)
            break;

        auto unlimited = SliceBudget::unlimited();
        MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    }
    MOZ_ASSERT(marker.isDrained());

    marker.leaveWeakMarkingMode();
}

void
GCRuntime::markWeakReferencesInCurrentGroup(gcstats::PhaseKind phase)
{
    markWeakReferences<GCSweepGroupIter>(phase);
}

template <class ZoneIterT, class CompartmentIterT>
void
GCRuntime::markGrayReferences(gcstats::PhaseKind phase)
{
    gcstats::AutoPhase ap(stats(), phase);
    if (hasBufferedGrayRoots()) {
        for (ZoneIterT zone(rt); !zone.done(); zone.next())
            markBufferedGrayRoots(zone);
    } else {
        MOZ_ASSERT(!isIncremental);
        if (JSTraceDataOp op = grayRootTracer.op)
            (*op)(&marker, grayRootTracer.data);
    }
    auto unlimited = SliceBudget::unlimited();
    MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
}

void
GCRuntime::markGrayReferencesInCurrentGroup(gcstats::PhaseKind phase)
{
    markGrayReferences<GCSweepGroupIter, GCCompartmentGroupIter>(phase);
}

void
GCRuntime::markAllWeakReferences(gcstats::PhaseKind phase)
{
    markWeakReferences<GCZonesIter>(phase);
}

void
GCRuntime::markAllGrayReferences(gcstats::PhaseKind phase)
{
    markGrayReferences<GCZonesIter, GCCompartmentsIter>(phase);
}

#ifdef JS_GC_ZEAL

struct GCChunkHasher {
    typedef gc::Chunk* Lookup;

    /*
     * Strip zeros for better distribution after multiplying by the golden
     * ratio.
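     *
     * Illustrative example (assuming 1 MiB chunks, i.e. gc::ChunkShift == 20;
     * the real constant is defined elsewhere in the engine):
     *
     *     gc::Chunk* chunk = ...;                    // e.g. 0x7f3a54700000
     *     MOZ_ASSERT(!(uintptr_t(chunk) & gc::ChunkMask));
     *     HashNumber h = HashNumber(uintptr_t(chunk) >> gc::ChunkShift);
     *
     * Without the shift the low ChunkShift bits of every key would be zero
     * and would contribute nothing to the golden-ratio scramble.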
*/staticHashNumberhash(gc::Chunk*chunk){MOZ_ASSERT(!(uintptr_t(chunk)&gc::ChunkMask));returnHashNumber(uintptr_t(chunk)>>gc::ChunkShift);}staticboolmatch(gc::Chunk*k,gc::Chunk*l){MOZ_ASSERT(!(uintptr_t(k)&gc::ChunkMask));MOZ_ASSERT(!(uintptr_t(l)&gc::ChunkMask));returnk==l;}};classjs::gc::MarkingValidator{public:explicitMarkingValidator(GCRuntime*gc);~MarkingValidator();voidnonIncrementalMark(AutoLockForExclusiveAccess&lock);voidvalidate();private:GCRuntime*gc;boolinitialized;typedefHashMap<Chunk*,ChunkBitmap*,GCChunkHasher,SystemAllocPolicy>BitmapMap;BitmapMapmap;};js::gc::MarkingValidator::MarkingValidator(GCRuntime*gc):gc(gc),initialized(false){}js::gc::MarkingValidator::~MarkingValidator(){if(!map.initialized())return;for(BitmapMap::Ranger(map.all());!r.empty();r.popFront())js_delete(r.front().value());}voidjs::gc::MarkingValidator::nonIncrementalMark(AutoLockForExclusiveAccess&lock){/* * Perform a non-incremental mark for all collecting zones and record * the results for later comparison. * * Currently this does not validate gray marking. */if(!map.init())return;JSRuntime*runtime=gc->rt;GCMarker*gcmarker=&gc->marker;gc->waitBackgroundSweepEnd();/* Save existing mark bits. */for(autochunk=gc->allNonEmptyChunks();!chunk.done();chunk.next()){ChunkBitmap*bitmap=&chunk->bitmap;ChunkBitmap*entry=js_new<ChunkBitmap>();if(!entry)return;memcpy((void*)entry->bitmap,(void*)bitmap->bitmap,sizeof(bitmap->bitmap));if(!map.putNew(chunk,entry))return;}/* * Temporarily clear the weakmaps' mark flags for the compartments we are * collecting. */WeakMapSetmarkedWeakMaps;if(!markedWeakMaps.init())return;/* * For saving, smush all of the keys into one big table and split them back * up into per-zone tables when restoring. */gc::WeakKeyTablesavedWeakKeys(SystemAllocPolicy(),runtime->randomHashCodeScrambler());if(!savedWeakKeys.init())return;for(GCZonesIterzone(runtime);!zone.done();zone.next()){if(!WeakMapBase::saveZoneMarkedWeakMaps(zone,markedWeakMaps))return;AutoEnterOOMUnsafeRegionoomUnsafe;for(gc::WeakKeyTable::Ranger=zone->gcWeakKeys().all();!r.empty();r.popFront()){if(!savedWeakKeys.put(Move(r.front().key),Move(r.front().value)))oomUnsafe.crash("saving weak keys table for validator");}if(!zone->gcWeakKeys().clear())oomUnsafe.crash("clearing weak keys table for validator");}/* * After this point, the function should run to completion, so we shouldn't * do anything fallible. */initialized=true;/* Re-do all the marking, but non-incrementally. */js::gc::Statestate=gc->incrementalState;gc->incrementalState=State::MarkRoots;{gcstats::AutoPhaseap(gc->stats(),gcstats::PhaseKind::PREPARE);{gcstats::AutoPhaseap(gc->stats(),gcstats::PhaseKind::UNMARK);for(GCZonesIterzone(runtime);!zone.done();zone.next())WeakMapBase::unmarkZone(zone);MOZ_ASSERT(gcmarker->isDrained());gcmarker->reset();for(autochunk=gc->allNonEmptyChunks();!chunk.done();chunk.next())chunk->bitmap.clear();}}{gcstats::AutoPhaseap(gc->stats(),gcstats::PhaseKind::MARK);gc->traceRuntimeForMajorGC(gcmarker,lock);gc->incrementalState=State::Mark;autounlimited=SliceBudget::unlimited();MOZ_RELEASE_ASSERT(gc->marker.drainMarkStack(unlimited));}gc->incrementalState=State::Sweep;{gcstats::AutoPhaseap1(gc->stats(),gcstats::PhaseKind::SWEEP);gcstats::AutoPhaseap2(gc->stats(),gcstats::PhaseKind::SWEEP_MARK);gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_WEAK);/* Update zone state for gray marking. 
*/for(GCZonesIterzone(runtime);!zone.done();zone.next()){MOZ_ASSERT(zone->isGCMarkingBlack());zone->setGCState(Zone::MarkGray);}gc->marker.setMarkColorGray();gc->markAllGrayReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY);gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);/* Restore zone state. */for(GCZonesIterzone(runtime);!zone.done();zone.next()){MOZ_ASSERT(zone->isGCMarkingGray());zone->setGCState(Zone::Mark);}MOZ_ASSERT(gc->marker.isDrained());gc->marker.setMarkColorBlack();}/* Take a copy of the non-incremental mark state and restore the original. */for(autochunk=gc->allNonEmptyChunks();!chunk.done();chunk.next()){ChunkBitmap*bitmap=&chunk->bitmap;ChunkBitmap*entry=map.lookup(chunk)->value();Swap(*entry,*bitmap);}for(GCZonesIterzone(runtime);!zone.done();zone.next()){WeakMapBase::unmarkZone(zone);AutoEnterOOMUnsafeRegionoomUnsafe;if(!zone->gcWeakKeys().clear())oomUnsafe.crash("clearing weak keys table for validator");}WeakMapBase::restoreMarkedWeakMaps(markedWeakMaps);for(gc::WeakKeyTable::Ranger=savedWeakKeys.all();!r.empty();r.popFront()){AutoEnterOOMUnsafeRegionoomUnsafe;Zone*zone=gc::TenuredCell::fromPointer(r.front().key.asCell())->zone();if(!zone->gcWeakKeys().put(Move(r.front().key),Move(r.front().value)))oomUnsafe.crash("restoring weak keys table for validator");}gc->incrementalState=state;}voidjs::gc::MarkingValidator::validate(){/* * Validates the incremental marking for a single compartment by comparing * the mark bits to those previously recorded for a non-incremental mark. */if(!initialized)return;gc->waitBackgroundSweepEnd();for(autochunk=gc->allNonEmptyChunks();!chunk.done();chunk.next()){BitmapMap::Ptrptr=map.lookup(chunk);if(!ptr)continue;/* Allocated after we did the non-incremental mark. */ChunkBitmap*bitmap=ptr->value();ChunkBitmap*incBitmap=&chunk->bitmap;for(size_ti=0;i<ArenasPerChunk;i++){if(chunk->decommittedArenas.get(i))continue;Arena*arena=&chunk->arenas[i];if(!arena->allocated())continue;if(!arena->zone->isGCSweeping())continue;if(arena->allocatedDuringIncremental)continue;AllocKindkind=arena->getAllocKind();uintptr_tthing=arena->thingsStart();uintptr_tend=arena->thingsEnd();while(thing<end){Cell*cell=(Cell*)thing;/* * If a non-incremental GC wouldn't have collected a cell, then * an incremental GC won't collect it. */if(bitmap->isMarked(cell,BLACK))MOZ_RELEASE_ASSERT(incBitmap->isMarked(cell,BLACK));/* * If the cycle collector isn't allowed to collect an object * after a non-incremental GC has run, then it isn't allowed to * collected it after an incremental GC. */if(!bitmap->isMarked(cell,GRAY))MOZ_RELEASE_ASSERT(!incBitmap->isMarked(cell,GRAY));thing+=Arena::thingSize(kind);}}}}#endif // JS_GC_ZEALvoidGCRuntime::computeNonIncrementalMarkingForValidation(AutoLockForExclusiveAccess&lock){#ifdef JS_GC_ZEALMOZ_ASSERT(!markingValidator);if(isIncremental&&hasZealMode(ZealMode::IncrementalMarkingValidator))markingValidator=js_new<MarkingValidator>(this);if(markingValidator)markingValidator->nonIncrementalMark(lock);#endif}voidGCRuntime::validateIncrementalMarking(){#ifdef JS_GC_ZEALif(markingValidator)markingValidator->validate();#endif}voidGCRuntime::finishMarkingValidation(){#ifdef JS_GC_ZEALjs_delete(markingValidator.ref());markingValidator=nullptr;#endif}staticvoidDropStringWrappers(JSRuntime*rt){/* * String "wrappers" are dropped on GC because their presence would require * us to sweep the wrappers in all compartments every time we sweep a * compartment group. 
*/for(CompartmentsIterc(rt,SkipAtoms);!c.done();c.next()){for(JSCompartment::StringWrapperEnume(c);!e.empty();e.popFront()){MOZ_ASSERT(e.front().key().is<JSString*>());e.removeFront();}}}/* * Group zones that must be swept at the same time. * * If compartment A has an edge to an unmarked object in compartment B, then we * must not sweep A in a later slice than we sweep B. That's because a write * barrier in A could lead to the unmarked object in B becoming marked. * However, if we had already swept that object, we would be in trouble. * * If we consider these dependencies as a graph, then all the compartments in * any strongly-connected component of this graph must be swept in the same * slice. * * Tarjan's algorithm is used to calculate the components. */namespace{structAddOutgoingEdgeFunctor{boolneedsEdge_;ZoneComponentFinder&finder_;AddOutgoingEdgeFunctor(boolneedsEdge,ZoneComponentFinder&finder):needsEdge_(needsEdge),finder_(finder){}template<typenameT>voidoperator()(Ttp){TenuredCell&other=(*tp)->asTenured();/* * Add edge to wrapped object compartment if wrapped object is not * marked black to indicate that wrapper compartment not be swept * after wrapped compartment. */if(needsEdge_){JS::Zone*zone=other.zone();if(zone->isGCMarking())finder_.addEdgeTo(zone);}}};}// namespace (anonymous)voidJSCompartment::findOutgoingEdges(ZoneComponentFinder&finder){for(js::WrapperMap::Enume(crossCompartmentWrappers);!e.empty();e.popFront()){CrossCompartmentKey&key=e.front().mutableKey();MOZ_ASSERT(!key.is<JSString*>());boolneedsEdge=true;if(key.is<JSObject*>()){TenuredCell&other=key.as<JSObject*>()->asTenured();needsEdge=!other.isMarked(BLACK)||other.isMarked(GRAY);}key.applyToWrapped(AddOutgoingEdgeFunctor(needsEdge,finder));}}voidZone::findOutgoingEdges(ZoneComponentFinder&finder){/* * Any compartment may have a pointer to an atom in the atoms * compartment, and these aren't in the cross compartment map. */JSRuntime*rt=runtimeFromActiveCooperatingThread();Zone*atomsZone=rt->atomsCompartment(finder.lock)->zone();if(atomsZone->isGCMarking())finder.addEdgeTo(atomsZone);for(CompartmentsInZoneItercomp(this);!comp.done();comp.next())comp->findOutgoingEdges(finder);for(ZoneSet::Ranger=gcSweepGroupEdges().all();!r.empty();r.popFront()){if(r.front()->isGCMarking())finder.addEdgeTo(r.front());}Debugger::findZoneEdges(this,finder);}boolGCRuntime::findInterZoneEdges(){/* * Weakmaps which have keys with delegates in a different zone introduce the * need for zone edges from the delegate's zone to the weakmap zone. * * Since the edges point into and not away from the zone the weakmap is in * we must find these edges in advance and store them in a set on the Zone. * If we run out of memory, we fall back to sweeping everything in one * group. 
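     *
     * Illustrative sketch (hypothetical helpers, not the real API): a weak
     * map living in zone W whose key's delegate lives in zone D needs the
     * edge D -> W, because marking the delegate in D can make the
     * corresponding value in W live, so W must not be swept before D:
     *
     *     for (WeakMapBase* map : weakMapsIn(W)) {             // hypothetical
     *         for (auto& entry : entriesOf(map)) {             // hypothetical
     *             Zone* delegateZone = zoneOfDelegate(entry);  // hypothetical
     *             if (delegateZone && delegateZone != W)
     *                 delegateZone->gcSweepGroupEdges().put(W);
     *         }
     *     }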
     */
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (!WeakMapBase::findInterZoneEdges(zone))
            return false;
    }

    return true;
}

void
GCRuntime::groupZonesForSweeping(JS::gcreason::Reason reason, AutoLockForExclusiveAccess& lock)
{
#ifdef DEBUG
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
        MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
#endif

    JSContext* cx = TlsContext.get();
    ZoneComponentFinder finder(cx->nativeStackLimit[JS::StackForSystemCode], lock);
    if (!isIncremental || !findInterZoneEdges())
        finder.useOneComponent();

#ifdef JS_GC_ZEAL
    // Use one component for IncrementalSweepThenFinish zeal mode.
    if (isIncremental && reason == JS::gcreason::DEBUG_GC &&
        hasZealMode(ZealMode::IncrementalSweepThenFinish))
    {
        finder.useOneComponent();
    }
#endif

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        MOZ_ASSERT(zone->isGCMarking());
        finder.addNode(zone);
    }
    sweepGroups = finder.getResultsList();
    currentSweepGroup = sweepGroups;
    sweepGroupIndex = 0;

    for (GCZonesIter zone(rt); !zone.done(); zone.next())
        zone->gcSweepGroupEdges().clear();

#ifdef DEBUG
    for (Zone* head = currentSweepGroup; head; head = head->nextGroup()) {
        for (Zone* zone = head; zone; zone = zone->nextNodeInGroup())
            MOZ_ASSERT(zone->isGCMarking());
    }

    MOZ_ASSERT_IF(!isIncremental, !currentSweepGroup->nextGroup());
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
        MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
#endif
}

static void ResetGrayList(JSCompartment* comp);

void
GCRuntime::getNextSweepGroup()
{
    currentSweepGroup = currentSweepGroup->nextGroup();
    ++sweepGroupIndex;
    if (!currentSweepGroup) {
        abortSweepAfterCurrentGroup = false;
        return;
    }

    for (Zone* zone = currentSweepGroup; zone; zone = zone->nextNodeInGroup()) {
        MOZ_ASSERT(zone->isGCMarking());
        MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    }

    if (!isIncremental)
        ZoneComponentFinder::mergeGroups(currentSweepGroup);

    if (abortSweepAfterCurrentGroup) {
        MOZ_ASSERT(!isIncremental);
        for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
            MOZ_ASSERT(!zone->gcNextGraphComponent);
            MOZ_ASSERT(zone->isGCMarking());
            zone->setNeedsIncrementalBarrier(false);
            zone->setGCState(Zone::NoGC);
            zone->gcGrayRoots().clearAndFree();
        }

        for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next())
            ResetGrayList(comp);

        abortSweepAfterCurrentGroup = false;
        currentSweepGroup = nullptr;
    }
}

/*
 * Gray marking:
 *
 * At the end of collection, anything reachable from a gray root that has not
 * otherwise been marked black must be marked gray.
 *
 * This means that when marking things gray we must not allow marking to leave
 * the current compartment group, as that could result in things being marked
 * gray when they might subsequently be marked black. To achieve this, when we
 * find a cross compartment pointer we don't mark the referent but add it to a
 * singly-linked list of incoming gray pointers that is stored with each
 * compartment.
 *
 * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains
 * cross compartment wrapper objects. The next pointer is stored in the second
 * extra slot of the cross compartment wrapper.
 *
 * The list is created during gray marking when one of the
 * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
 * current compartment group. This calls DelayCrossCompartmentGrayMarking to
 * push the referring object onto the list.
 *
 * The list is traversed and then unlinked in
 * MarkIncomingCrossCompartmentPointers.
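 *
 * For illustration, the resulting list for a compartment |comp| looks like:
 *
 *   comp->gcIncomingGrayPointers --> wrapperA --> wrapperB --> null
 *
 * where each link after the head is held in the wrapper's gray link reserved
 * slot, and each wrapper's private value points at an object in |comp|.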
 */

static bool
IsGrayListObject(JSObject* obj)
{
    MOZ_ASSERT(obj);
    return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
}

/* static */ unsigned
ProxyObject::grayLinkReservedSlot(JSObject* obj)
{
    MOZ_ASSERT(IsGrayListObject(obj));
    return CrossCompartmentWrapperObject::GrayLinkReservedSlot;
}

#ifdef DEBUG
static void
AssertNotOnGrayList(JSObject* obj)
{
    MOZ_ASSERT_IF(IsGrayListObject(obj),
                  GetProxyReservedSlot(obj, ProxyObject::grayLinkReservedSlot(obj)).isUndefined());
}
#endif

static void
AssertNoWrappersInGrayList(JSRuntime* rt)
{
#ifdef DEBUG
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        MOZ_ASSERT(!c->gcIncomingGrayPointers);
        for (JSCompartment::NonStringWrapperEnum e(c); !e.empty(); e.popFront())
            AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
    }
#endif
}

static JSObject*
CrossCompartmentPointerReferent(JSObject* obj)
{
    MOZ_ASSERT(IsGrayListObject(obj));
    return &obj->as<ProxyObject>().private_().toObject();
}

static JSObject*
NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
{
    unsigned slot = ProxyObject::grayLinkReservedSlot(prev);
    JSObject* next = GetProxyReservedSlot(prev, slot).toObjectOrNull();
    MOZ_ASSERT_IF(next, IsGrayListObject(next));

    if (unlink)
        SetProxyReservedSlot(prev, slot, UndefinedValue());

    return next;
}

void
js::DelayCrossCompartmentGrayMarking(JSObject* src)
{
    MOZ_ASSERT(IsGrayListObject(src));

    /* Called from MarkCrossCompartmentXXX functions. */
    unsigned slot = ProxyObject::grayLinkReservedSlot(src);
    JSObject* dest = CrossCompartmentPointerReferent(src);
    JSCompartment* comp = dest->compartment();

    if (GetProxyReservedSlot(src, slot).isUndefined()) {
        SetProxyReservedSlot(src, slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
        comp->gcIncomingGrayPointers = src;
    } else {
        MOZ_ASSERT(GetProxyReservedSlot(src, slot).isObjectOrNull());
    }

#ifdef DEBUG
    /*
     * Assert that the object is in our list, also walking the list to check
     * its integrity.
     */
    JSObject* obj = comp->gcIncomingGrayPointers;
    bool found = false;
    while (obj) {
        if (obj == src)
            found = true;
        obj = NextIncomingCrossCompartmentPointer(obj, false);
    }
    MOZ_ASSERT(found);
#endif
}

static void
MarkIncomingCrossCompartmentPointers(JSRuntime* rt, const uint32_t color)
{
    MOZ_ASSERT(color == BLACK || color == GRAY);

    static const gcstats::PhaseKind statsPhases[] = {
        gcstats::PhaseKind::SWEEP_MARK_INCOMING_BLACK,
        gcstats::PhaseKind::SWEEP_MARK_INCOMING_GRAY
    };
    gcstats::AutoPhase ap1(rt->gc.stats(), statsPhases[color]);

    bool unlinkList = color == GRAY;

    for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
        MOZ_ASSERT_IF(color == GRAY, c->zone()->isGCMarkingGray());
        MOZ_ASSERT_IF(color == BLACK, c->zone()->isGCMarkingBlack());
        MOZ_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));

        for (JSObject* src = c->gcIncomingGrayPointers;
             src;
             src = NextIncomingCrossCompartmentPointer(src, unlinkList))
        {
            JSObject* dst = CrossCompartmentPointerReferent(src);
            MOZ_ASSERT(dst->compartment() == c);

            if (color == GRAY) {
                if (IsMarkedUnbarriered(rt, &src) && src->asTenured().isMarked(GRAY))
                    TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
                                               "cross-compartment gray pointer");
            } else {
                if (IsMarkedUnbarriered(rt, &src) && !src->asTenured().isMarked(GRAY))
                    TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
                                               "cross-compartment black pointer");
            }
        }

        if (unlinkList)
            c->gcIncomingGrayPointers = nullptr;
    }

    auto unlimited = SliceBudget::unlimited();
    MOZ_RELEASE_ASSERT(rt->gc.marker.drainMarkStack(unlimited));
}

static bool
RemoveFromGrayList(JSObject* wrapper)
{
    if (!IsGrayListObject(wrapper))
        return false;

    unsigned slot = ProxyObject::grayLinkReservedSlot(wrapper);
    if (GetProxyReservedSlot(wrapper, slot).isUndefined())
        return false;  /* Not on our list. */
    JSObject* tail = GetProxyReservedSlot(wrapper, slot).toObjectOrNull();
    SetProxyReservedSlot(wrapper, slot, UndefinedValue());

    JSCompartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
    JSObject* obj = comp->gcIncomingGrayPointers;
    if (obj == wrapper) {
        comp->gcIncomingGrayPointers = tail;
        return true;
    }

    while (obj) {
        unsigned slot = ProxyObject::grayLinkReservedSlot(obj);
        JSObject* next = GetProxyReservedSlot(obj, slot).toObjectOrNull();
        if (next == wrapper) {
            SetProxyReservedSlot(obj, slot, ObjectOrNullValue(tail));
            return true;
        }
        obj = next;
    }

    MOZ_CRASH("object not found in gray link list");
}

static void
ResetGrayList(JSCompartment* comp)
{
    JSObject* src = comp->gcIncomingGrayPointers;
    while (src)
        src = NextIncomingCrossCompartmentPointer(src, true);
    comp->gcIncomingGrayPointers = nullptr;
}

void
js::NotifyGCNukeWrapper(JSObject* obj)
{
    /*
     * References to the target of the wrapper are being removed; we no longer
     * have to remember to mark it.
     */
    RemoveFromGrayList(obj);
}

enum {
    JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
    JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
};

unsigned
js::NotifyGCPreSwap(JSObject* a, JSObject* b)
{
    /*
     * Two objects in the same compartment are about to have their contents
     * swapped. If either of them is in our gray pointer list, then we remove
     * it from the list, returning a bitset indicating what happened.
     */
    return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
           (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
}

void
js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
{
    /*
     * Two objects in the same compartment have had their contents swapped.
     * If either of them was in our gray pointer list, we re-add it.
     */
    if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
        DelayCrossCompartmentGrayMarking(b);
    if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
        DelayCrossCompartmentGrayMarking(a);
}

void
GCRuntime::endMarkingSweepGroup()
{
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_MARK);

    /*
     * Mark any incoming black pointers from previously swept compartments
     * whose referents are not marked. This can occur when gray cells become
     * black by the action of UnmarkGray.
     */
    MarkIncomingCrossCompartmentPointers(rt, BLACK);
    markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_WEAK);

    /*
     * Change state of current group to MarkGray to restrict marking to this
     * group. Note that there may be pointers to the atoms compartment, and
     * these will be marked through, as they are not marked with
     * MarkCrossCompartmentXXX.
     */
    for (GCSweepGroupIter zone(rt); !zone.done(); zone.next()) {
        MOZ_ASSERT(zone->isGCMarkingBlack());
        zone->setGCState(Zone::MarkGray);
    }
    marker.setMarkColorGray();

    /* Mark incoming gray pointers from previously swept compartments. */
    MarkIncomingCrossCompartmentPointers(rt, GRAY);

    /* Mark gray roots and mark transitively inside the current compartment group. */
    markGrayReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);

    /* Restore marking state.
*/for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){MOZ_ASSERT(zone->isGCMarkingGray());zone->setGCState(Zone::Mark);}MOZ_ASSERT(marker.isDrained());marker.setMarkColorBlack();}// Causes the given WeakCache to be swept when run.classSweepWeakCacheTask:publicGCParallelTask{JS::detail::WeakCacheBase&cache;SweepWeakCacheTask(constSweepWeakCacheTask&)=delete;public:SweepWeakCacheTask(JSRuntime*rt,JS::detail::WeakCacheBase&wc):GCParallelTask(rt),cache(wc){}SweepWeakCacheTask(SweepWeakCacheTask&&other):GCParallelTask(mozilla::Move(other)),cache(other.cache){}voidrun()override{cache.sweep();}};staticvoidUpdateAtomsBitmap(JSRuntime*runtime){DenseBitmapmarked;if(runtime->gc.atomMarking.computeBitmapFromChunkMarkBits(runtime,marked)){for(GCZonesIterzone(runtime);!zone.done();zone.next())runtime->gc.atomMarking.updateZoneBitmap(zone,marked);}else{// Ignore OOM in computeBitmapFromChunkMarkBits. The updateZoneBitmap// call can only remove atoms from the zone bitmap, so it is// conservative to just not call it.}runtime->gc.atomMarking.updateChunkMarkBits(runtime);// For convenience sweep these tables non-incrementally as part of bitmap// sweeping; they are likely to be much smaller than the main atoms table.runtime->unsafeSymbolRegistry().sweep();for(CompartmentsItercomp(runtime,SkipAtoms);!comp.done();comp.next())comp->sweepVarNames();}staticvoidSweepCCWrappers(JSRuntime*runtime){for(GCCompartmentGroupIterc(runtime);!c.done();c.next())c->sweepCrossCompartmentWrappers();}staticvoidSweepObjectGroups(JSRuntime*runtime){for(GCCompartmentGroupIterc(runtime);!c.done();c.next())c->objectGroups.sweep(runtime->defaultFreeOp());}staticvoidSweepRegExps(JSRuntime*runtime){for(GCCompartmentGroupIterc(runtime);!c.done();c.next())c->sweepRegExps();}staticvoidSweepMisc(JSRuntime*runtime){for(GCCompartmentGroupIterc(runtime);!c.done();c.next()){c->sweepGlobalObject();c->sweepTemplateObjects();c->sweepSavedStacks();c->sweepTemplateLiteralMap();c->sweepSelfHostingScriptSource();c->sweepNativeIterators();c->sweepWatchpoints();}}staticvoidSweepCompressionTasks(JSRuntime*runtime){AutoLockHelperThreadStatelock;// Attach finished compression tasks.auto&finished=HelperThreadState().compressionFinishedList(lock);for(size_ti=0;i<finished.length();i++){if(finished[i]->runtimeMatches(runtime)){UniquePtr<SourceCompressionTask>task(Move(finished[i]));HelperThreadState().remove(finished,&i);task->complete();}}// Sweep pending tasks that are holding onto should-be-dead ScriptSources.auto&pending=HelperThreadState().compressionPendingList(lock);for(size_ti=0;i<pending.length();i++){if(pending[i]->shouldCancel())HelperThreadState().remove(pending,&i);}}staticvoidSweepWeakMaps(JSRuntime*runtime){for(GCSweepGroupIterzone(runtime);!zone.done();zone.next()){/* Clear all weakrefs that point to unmarked things. */for(autoedge:zone->gcWeakRefs()){/* Edges may be present multiple times, so may already be nulled. */if(*edge&&IsAboutToBeFinalizedDuringSweep(**edge))*edge=nullptr;}zone->gcWeakRefs().clear();/* No need to look up any more weakmap keys from this sweep group. 
*/AutoEnterOOMUnsafeRegionoomUnsafe;if(!zone->gcWeakKeys().clear())oomUnsafe.crash("clearing weak keys in beginSweepingSweepGroup()");zone->sweepWeakMaps();}}staticvoidSweepUniqueIds(JSRuntime*runtime){FreeOpfop(nullptr);for(GCSweepGroupIterzone(runtime);!zone.done();zone.next())zone->sweepUniqueIds(&fop);}voidGCRuntime::startTask(GCParallelTask&task,gcstats::PhaseKindphase,AutoLockHelperThreadState&locked){if(!task.startWithLockHeld(locked)){AutoUnlockHelperThreadStateunlock(locked);gcstats::AutoPhaseap(stats(),phase);task.runFromActiveCooperatingThread(rt);}}voidGCRuntime::joinTask(GCParallelTask&task,gcstats::PhaseKindphase,AutoLockHelperThreadState&locked){{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::JOIN_PARALLEL_TASKS);task.joinWithLockHeld(locked);}stats().recordParallelPhase(phase,task.duration());}voidGCRuntime::sweepDebuggerOnMainThread(FreeOp*fop){// Detach unreachable debuggers and global objects from each other.// This can modify weakmaps and so must happen before weakmap sweeping.Debugger::sweepAll(fop);gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::SWEEP_COMPARTMENTS);// Sweep debug environment information. This performs lookups in the Zone's// unique IDs table and so must not happen in parallel with sweeping that// table.{gcstats::AutoPhaseap2(stats(),gcstats::PhaseKind::SWEEP_MISC);for(GCCompartmentGroupIterc(rt);!c.done();c.next())c->sweepDebugEnvironments();}// Sweep breakpoints. This is done here to be with the other debug sweeping,// although note that it can cause JIT code to be patched.{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::SWEEP_BREAKPOINT);for(GCSweepGroupIterzone(rt);!zone.done();zone.next())zone->sweepBreakpoints(fop);}}voidGCRuntime::sweepJitDataOnMainThread(FreeOp*fop){{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::SWEEP_JIT_DATA);// Cancel any active or pending off thread compilations.js::CancelOffThreadIonCompile(rt,JS::Zone::Sweep);for(GCCompartmentGroupIterc(rt);!c.done();c.next())c->sweepJitCompartment(fop);for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){if(jit::JitZone*jitZone=zone->jitZone())jitZone->sweep(fop);}// Bug 1071218: the following method has not yet been refactored to// work on a single zone-group at once.// Sweep entries containing about-to-be-finalized JitCode and// update relocated TypeSet::Types inside the JitcodeGlobalTable.jit::JitRuntime::SweepJitcodeGlobalTable(rt);}{gcstats::AutoPhaseapdc(stats(),gcstats::PhaseKind::SWEEP_DISCARD_CODE);for(GCSweepGroupIterzone(rt);!zone.done();zone.next())zone->discardJitCode(fop);}{gcstats::AutoPhaseap1(stats(),gcstats::PhaseKind::SWEEP_TYPES);gcstats::AutoPhaseap2(stats(),gcstats::PhaseKind::SWEEP_TYPES_BEGIN);for(GCSweepGroupIterzone(rt);!zone.done();zone.next())zone->beginSweepTypes(fop,releaseObservedTypes&&!zone->isPreservingCode());}}usingWeakCacheTaskVector=mozilla::Vector<SweepWeakCacheTask,0,SystemAllocPolicy>;template<typenameFunctor>staticinlineboolIterateWeakCaches(JSRuntime*rt,Functorf){for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){for(JS::detail::WeakCacheBase*cache:zone->weakCaches()){if(!f(cache))returnfalse;}}for(JS::detail::WeakCacheBase*cache:rt->weakCaches()){if(!f(cache))returnfalse;}returntrue;}staticWeakCacheTaskVectorPrepareWeakCacheTasks(JSRuntime*rt){// Build a vector of sweep tasks to run on a helper thread.WeakCacheTaskVectortasks;boolok=IterateWeakCaches(rt,[&](JS::detail::WeakCacheBase*cache){if(!cache->needsSweep())returntrue;returntasks.emplaceBack(rt,*cache);});// If we ran out of memory, do all the work now and ensure we return 
an// empty list.if(!ok){IterateWeakCaches(rt,[&](JS::detail::WeakCacheBase*cache){SweepWeakCacheTask(rt,*cache).runFromActiveCooperatingThread(rt);returntrue;});tasks.clear();}returntasks;}voidGCRuntime::beginSweepingSweepGroup(){/* * Begin sweeping the group of zones in currentSweepGroup, performing * actions that must be done before yielding to caller. */usingnamespacegcstats;AutoSCCscc(stats(),sweepGroupIndex);boolsweepingAtoms=false;for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){/* Set the GC state to sweeping. */MOZ_ASSERT(zone->isGCMarking());zone->setGCState(Zone::Sweep);/* Purge the ArenaLists before sweeping. */zone->arenas.purge();if(zone->isAtomsZone())sweepingAtoms=true;#ifdef DEBUGzone->gcLastSweepGroupIndex=sweepGroupIndex;#endif}validateIncrementalMarking();FreeOpfop(rt);{AutoPhaseap(stats(),PhaseKind::FINALIZE_START);callFinalizeCallbacks(&fop,JSFINALIZE_GROUP_PREPARE);{AutoPhaseap2(stats(),PhaseKind::WEAK_ZONES_CALLBACK);callWeakPointerZonesCallbacks();}{AutoPhaseap2(stats(),PhaseKind::WEAK_COMPARTMENT_CALLBACK);for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){for(CompartmentsInZoneItercomp(zone);!comp.done();comp.next())callWeakPointerCompartmentCallbacks(comp);}}callFinalizeCallbacks(&fop,JSFINALIZE_GROUP_START);}sweepDebuggerOnMainThread(&fop);{AutoLockHelperThreadStatelock;Maybe<AutoRunParallelTask>updateAtomsBitmap;if(sweepingAtoms)updateAtomsBitmap.emplace(rt,UpdateAtomsBitmap,PhaseKind::UPDATE_ATOMS_BITMAP,lock);AutoPhaseap(stats(),PhaseKind::SWEEP_COMPARTMENTS);AutoRunParallelTasksweepCCWrappers(rt,SweepCCWrappers,PhaseKind::SWEEP_CC_WRAPPER,lock);AutoRunParallelTasksweepObjectGroups(rt,SweepObjectGroups,PhaseKind::SWEEP_TYPE_OBJECT,lock);AutoRunParallelTasksweepRegExps(rt,SweepRegExps,PhaseKind::SWEEP_REGEXP,lock);AutoRunParallelTasksweepMisc(rt,SweepMisc,PhaseKind::SWEEP_MISC,lock);AutoRunParallelTasksweepCompTasks(rt,SweepCompressionTasks,PhaseKind::SWEEP_COMPRESSION,lock);AutoRunParallelTasksweepWeakMaps(rt,SweepWeakMaps,PhaseKind::SWEEP_WEAKMAPS,lock);AutoRunParallelTasksweepUniqueIds(rt,SweepUniqueIds,PhaseKind::SWEEP_UNIQUEIDS,lock);WeakCacheTaskVectorsweepCacheTasks=PrepareWeakCacheTasks(rt);for(auto&task:sweepCacheTasks)startTask(task,PhaseKind::SWEEP_WEAK_CACHES,lock);{AutoUnlockHelperThreadStateunlock(lock);sweepJitDataOnMainThread(&fop);}for(auto&task:sweepCacheTasks)joinTask(task,PhaseKind::SWEEP_WEAK_CACHES,lock);}if(sweepingAtoms)startSweepingAtomsTable();// Queue all GC things in all zones for sweeping, either on the foreground// or on the background thread.for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){zone->arenas.queueForForegroundSweep(&fop,ForegroundObjectFinalizePhase);for(unsignedi=0;i<ArrayLength(IncrementalFinalizePhases);++i)zone->arenas.queueForForegroundSweep(&fop,IncrementalFinalizePhases[i]);for(unsignedi=0;i<ArrayLength(BackgroundFinalizePhases);++i)zone->arenas.queueForBackgroundSweep(&fop,BackgroundFinalizePhases[i]);zone->arenas.queueForegroundThingsForSweep(&fop);}sweepActionList=PerSweepGroupActionList;sweepActionIndex=0;sweepPhaseIndex=0;sweepZone=currentSweepGroup;}voidGCRuntime::endSweepingSweepGroup(){{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::FINALIZE_END);FreeOpfop(rt);callFinalizeCallbacks(&fop,JSFINALIZE_GROUP_END);}/* Update the GC state for zones we have swept. 
*/for(GCSweepGroupIterzone(rt);!zone.done();zone.next()){MOZ_ASSERT(zone->isGCSweeping());AutoLockGClock(rt);zone->setGCState(Zone::Finished);zone->threshold.updateAfterGC(zone->usage.gcBytes(),invocationKind,tunables,schedulingState,lock);}/* Start background thread to sweep zones if required. */ZoneListzones;for(GCSweepGroupIterzone(rt);!zone.done();zone.next())zones.append(zone);if(sweepOnBackgroundThread)queueZonesForBackgroundSweep(zones);elsesweepBackgroundThings(zones,blocksToFreeAfterSweeping.ref());/* Reset the list of arenas marked as being allocated during sweep phase. */while(Arena*arena=arenasAllocatedDuringSweep){arenasAllocatedDuringSweep=arena->getNextAllocDuringSweep();arena->unsetAllocDuringSweep();}}voidGCRuntime::beginSweepPhase(JS::gcreason::Reasonreason,AutoLockForExclusiveAccess&lock){/* * Sweep phase. * * Finalize as we sweep, outside of lock but with CurrentThreadIsHeapBusy() * true so that any attempt to allocate a GC-thing from a finalizer will * fail, rather than nest badly and leave the unmarked newborn to be swept. */MOZ_ASSERT(!abortSweepAfterCurrentGroup);AutoSetThreadIsSweepingthreadIsSweeping;releaseHeldRelocatedArenas();computeNonIncrementalMarkingForValidation(lock);gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::SWEEP);sweepOnBackgroundThread=reason!=JS::gcreason::DESTROY_RUNTIME&&!TraceEnabled()&&CanUseExtraThreads();releaseObservedTypes=shouldReleaseObservedTypes();AssertNoWrappersInGrayList(rt);DropStringWrappers(rt);groupZonesForSweeping(reason,lock);endMarkingSweepGroup();beginSweepingSweepGroup();}boolArenaLists::foregroundFinalize(FreeOp*fop,AllocKindthingKind,SliceBudget&sliceBudget,SortedArenaList&sweepList){MOZ_ASSERT_IF(IsObjectAllocKind(thingKind),savedObjectArenas(thingKind).isEmpty());if(!arenaListsToSweep(thingKind)&&incrementalSweptArenas.ref().isEmpty())returntrue;KeepArenasEnumkeepArenas=IsObjectAllocKind(thingKind)?KEEP_ARENAS:RELEASE_ARENAS;if(!FinalizeArenas(fop,&arenaListsToSweep(thingKind),sweepList,thingKind,sliceBudget,keepArenas)){incrementalSweptArenaKind=thingKind;incrementalSweptArenas=sweepList.toArenaList();returnfalse;}// Clear any previous incremental sweep state we may have saved.incrementalSweptArenas.ref().clear();if(IsObjectAllocKind(thingKind)){// Delay releasing of object arenas until types have been swept.sweepList.extractEmpty(&savedEmptyObjectArenas.ref());savedObjectArenas(thingKind)=sweepList.toArenaList();}else{// Join |arenaLists[thingKind]| and |sweepList| into a single list.ArenaListfinalized=sweepList.toArenaList();arenaLists(thingKind)=finalized.insertListWithCursorAtEnd(arenaLists(thingKind));}returntrue;}IncrementalProgressGCRuntime::drainMarkStack(SliceBudget&sliceBudget,gcstats::PhaseKindphase){/* Run a marking slice and return whether the stack is now empty. 
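     *
     * For example, callers in the sweep phase use the result to decide whether
     * to yield the current slice:
     *
     *   if (drainMarkStack(budget, gcstats::PhaseKind::SWEEP_MARK) == NotFinished)
     *       return NotFinished;  // yield; marking resumes in the next slice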
*/gcstats::AutoPhaseap(stats(),phase);returnmarker.drainMarkStack(sliceBudget)?Finished:NotFinished;}staticvoidSweepThing(Shape*shape){if(!shape->isMarked())shape->sweep();}staticvoidSweepThing(JSScript*script,AutoClearTypeInferenceStateOnOOM*oom){script->maybeSweepTypes(oom);}staticvoidSweepThing(ObjectGroup*group,AutoClearTypeInferenceStateOnOOM*oom){group->maybeSweep(oom);}template<typenameT,typename...Args>staticboolSweepArenaList(Arena**arenasToSweep,SliceBudget&sliceBudget,Args...args){while(Arena*arena=*arenasToSweep){for(ArenaCellIterUnderGCi(arena);!i.done();i.next())SweepThing(i.get<T>(),args...);*arenasToSweep=(*arenasToSweep)->next;AllocKindkind=MapTypeToFinalizeKind<T>::kind;sliceBudget.step(Arena::thingsPerArena(kind));if(sliceBudget.isOverBudget())returnfalse;}returntrue;}/* static */IncrementalProgressGCRuntime::sweepTypeInformation(GCRuntime*gc,FreeOp*fop,Zone*zone,SliceBudget&budget,AllocKindkind){// Sweep dead type information stored in scripts and object groups, but// don't finalize them yet. We have to sweep dead information from both live// and dead scripts and object groups, so that no dead references remain in// them. Type inference can end up crawling these zones again, such as for// TypeCompartment::markSetsUnknown, and if this happens after sweeping for// the sweep group finishes we won't be able to determine which things in// the zone are live.MOZ_ASSERT(kind==AllocKind::LIMIT);gcstats::AutoPhaseap1(gc->stats(),gcstats::PhaseKind::SWEEP_COMPARTMENTS);gcstats::AutoPhaseap2(gc->stats(),gcstats::PhaseKind::SWEEP_TYPES);ArenaLists&al=zone->arenas;AutoClearTypeInferenceStateOnOOMoom(zone);if(!SweepArenaList<JSScript>(&al.gcScriptArenasToUpdate.ref(),budget,&oom))returnNotFinished;if(!SweepArenaList<ObjectGroup>(&al.gcObjectGroupArenasToUpdate.ref(),budget,&oom))returnNotFinished;// Finish sweeping type information in the zone.{gcstats::AutoPhaseap(gc->stats(),gcstats::PhaseKind::SWEEP_TYPES_END);zone->types.endSweep(gc->rt);}returnFinished;}/* static */IncrementalProgressGCRuntime::mergeSweptObjectArenas(GCRuntime*gc,FreeOp*fop,Zone*zone,SliceBudget&budget,AllocKindkind){// Foreground finalized objects have already been finalized, and now their// arenas can be reclaimed by freeing empty ones and making non-empty ones// available for allocation.MOZ_ASSERT(kind==AllocKind::LIMIT);zone->arenas.mergeForegroundSweptObjectArenas();returnFinished;}voidGCRuntime::startSweepingAtomsTable(){auto&maybeAtoms=maybeAtomsToSweep.ref();MOZ_ASSERT(maybeAtoms.isNothing());AtomSet*atomsTable=rt->atomsForSweeping();if(!atomsTable)return;// Create a secondary table to hold new atoms added while we're sweeping// the main table incrementally.if(!rt->createAtomsAddedWhileSweepingTable()){atomsTable->sweep();return;}// Initialize remaining atoms to sweep.maybeAtoms.emplace(*atomsTable);}/* static */IncrementalProgressGCRuntime::sweepAtomsTable(GCRuntime*gc,SliceBudget&budget){if(!gc->atomsZone->isGCSweeping())returnFinished;returngc->sweepAtomsTable(budget);}IncrementalProgressGCRuntime::sweepAtomsTable(SliceBudget&budget){gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::SWEEP_ATOMS_TABLE);auto&maybeAtoms=maybeAtomsToSweep.ref();if(!maybeAtoms)returnFinished;MOZ_ASSERT(rt->atomsAddedWhileSweeping());// Sweep the table incrementally until we run out of work or 
budget.auto&atomsToSweep=*maybeAtoms;while(!atomsToSweep.empty()){budget.step();if(budget.isOverBudget())returnNotFinished;JSAtom*atom=atomsToSweep.front().asPtrUnbarriered();if(IsAboutToBeFinalizedUnbarriered(&atom))atomsToSweep.removeFront();atomsToSweep.popFront();}// Add any new atoms from the secondary table.AutoEnterOOMUnsafeRegionoomUnsafe;AtomSet*atomsTable=rt->atomsForSweeping();MOZ_ASSERT(atomsTable);for(autor=rt->atomsAddedWhileSweeping()->all();!r.empty();r.popFront()){if(!atomsTable->putNew(AtomHasher::Lookup(r.front().asPtrUnbarriered()),r.front()))oomUnsafe.crash("Adding atom from secondary table after sweep");}rt->destroyAtomsAddedWhileSweepingTable();maybeAtoms.reset();returnFinished;}/* static */IncrementalProgressGCRuntime::finalizeAllocKind(GCRuntime*gc,FreeOp*fop,Zone*zone,SliceBudget&budget,AllocKindkind){// Set the number of things per arena for this AllocKind.size_tthingsPerArena=Arena::thingsPerArena(kind);auto&sweepList=gc->incrementalSweepList.ref();sweepList.setThingsPerArena(thingsPerArena);if(!zone->arenas.foregroundFinalize(fop,kind,budget,sweepList))returnNotFinished;// Reset the slots of the sweep list that we used.sweepList.reset(thingsPerArena);returnFinished;}/* static */IncrementalProgressGCRuntime::sweepShapeTree(GCRuntime*gc,FreeOp*fop,Zone*zone,SliceBudget&budget,AllocKindkind){// Remove dead shapes from the shape tree, but don't finalize them yet.MOZ_ASSERT(kind==AllocKind::LIMIT);gcstats::AutoPhaseap(gc->stats(),gcstats::PhaseKind::SWEEP_SHAPE);ArenaLists&al=zone->arenas;if(!SweepArenaList<Shape>(&al.gcShapeArenasToUpdate.ref(),budget))returnNotFinished;if(!SweepArenaList<AccessorShape>(&al.gcAccessorShapeArenasToUpdate.ref(),budget))returnNotFinished;returnFinished;}staticvoidAddPerSweepGroupSweepAction(bool*ok,PerSweepGroupSweepActionaction){if(*ok)*ok=PerSweepGroupSweepActions.emplaceBack(action);}staticvoidAddPerZoneSweepPhase(bool*ok){if(*ok)*ok=PerZoneSweepPhases.emplaceBack();}staticvoidAddPerZoneSweepAction(bool*ok,PerZoneSweepAction::Funcfunc,AllocKindkind=AllocKind::LIMIT){if(*ok)*ok=PerZoneSweepPhases.back().emplaceBack(func,kind);}/* static 
 */ bool
GCRuntime::initializeSweepActions()
{
    bool ok = true;

    AddPerSweepGroupSweepAction(&ok, GCRuntime::sweepAtomsTable);

    AddPerZoneSweepPhase(&ok);
    for (auto kind : ForegroundObjectFinalizePhase.kinds)
        AddPerZoneSweepAction(&ok, GCRuntime::finalizeAllocKind, kind);

    AddPerZoneSweepPhase(&ok);
    AddPerZoneSweepAction(&ok, GCRuntime::sweepTypeInformation);
    AddPerZoneSweepAction(&ok, GCRuntime::mergeSweptObjectArenas);

    for (const auto& finalizePhase : IncrementalFinalizePhases) {
        AddPerZoneSweepPhase(&ok);
        for (auto kind : finalizePhase.kinds)
            AddPerZoneSweepAction(&ok, GCRuntime::finalizeAllocKind, kind);
    }

    AddPerZoneSweepPhase(&ok);
    AddPerZoneSweepAction(&ok, GCRuntime::sweepShapeTree);

    return ok;
}

static inline SweepActionList
NextSweepActionList(SweepActionList list)
{
    MOZ_ASSERT(list < SweepActionListCount);
    return SweepActionList(unsigned(list) + 1);
}

IncrementalProgress
GCRuntime::performSweepActions(SliceBudget& budget, AutoLockForExclusiveAccess& lock)
{
    AutoSetThreadIsSweeping threadIsSweeping;

    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    FreeOp fop(rt);

    if (drainMarkStack(budget, gcstats::PhaseKind::SWEEP_MARK) == NotFinished)
        return NotFinished;

    for (;;) {
        for (; sweepActionList < SweepActionListCount;
             sweepActionList = NextSweepActionList(sweepActionList))
        {
            switch (sweepActionList) {
              case PerSweepGroupActionList: {
                const auto& actions = PerSweepGroupSweepActions;
                for (; sweepActionIndex < actions.length(); sweepActionIndex++) {
                    auto action = actions[sweepActionIndex];
                    if (action(this, budget) == NotFinished)
                        return NotFinished;
                }
                sweepActionIndex = 0;
                break;
              }

              case PerZoneActionList:
                for (; sweepPhaseIndex < PerZoneSweepPhases.length(); sweepPhaseIndex++) {
                    const auto& actions = PerZoneSweepPhases[sweepPhaseIndex];
                    for (; sweepZone; sweepZone = sweepZone->nextNodeInGroup()) {
                        for (; sweepActionIndex < actions.length(); sweepActionIndex++) {
                            const auto& action = actions[sweepActionIndex];
                            if (action.func(this, &fop, sweepZone, budget, action.kind) ==
                                NotFinished)
                            {
                                return NotFinished;
                            }
                        }
                        sweepActionIndex = 0;
                    }
                    sweepZone = currentSweepGroup;
                }
                sweepPhaseIndex = 0;
                break;

              default:
                MOZ_CRASH("Unexpected sweepActionList value");
            }
        }

        sweepActionList = PerSweepGroupActionList;

        endSweepingSweepGroup();
        getNextSweepGroup();
        if (!currentSweepGroup)
            return Finished;

        endMarkingSweepGroup();
        beginSweepingSweepGroup();
    }
}

bool
GCRuntime::allCCVisibleZonesWereCollected() const
{
    // Calculate whether the gray marking state is now valid.
    //
    // The gray bits change from invalid to valid if we finished a full GC from
    // the point of view of the cycle collector. We ignore the following:
    //
    //  - Helper thread zones, as these are not reachable from the main heap.
    //  - The atoms zone, since strings and symbols are never marked gray.
    //  - Empty zones.
    //
    // These exceptions ensure that when the CC requests a full GC the gray mark
    // state ends up valid even if we don't collect all of the zones.

    if (isFull)
        return true;

    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        if (!zone->isCollecting() &&
            !zone->usedByHelperThread() &&
            !zone->arenas.arenaListsAreEmpty())
        {
            return false;
        }
    }

    return true;
}

void
GCRuntime::endSweepPhase(bool destroyingRuntime, AutoLockForExclusiveAccess& lock)
{
    AutoSetThreadIsSweeping threadIsSweeping;

    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    FreeOp fop(rt);

    MOZ_ASSERT_IF(destroyingRuntime, !sweepOnBackgroundThread);

    /*
     * Recalculate whether GC was full or not as this may have changed due to
     * newly created zones. Can only change from full to not full.
*/if(isFull){for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){if(!zone->isCollecting()){isFull=false;break;}}}{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::DESTROY);/* * Sweep script filenames after sweeping functions in the generic loop * above. In this way when a scripted function's finalizer destroys the * script and calls rt->destroyScriptHook, the hook can still access the * script's filename. See bug 323267. */SweepScriptData(rt,lock);/* Clear out any small pools that we're hanging on to. */if(rt->hasJitRuntime()){rt->jitRuntime()->execAlloc().purge();rt->jitRuntime()->backedgeExecAlloc().purge();}}{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::FINALIZE_END);callFinalizeCallbacks(&fop,JSFINALIZE_COLLECTION_END);if(allCCVisibleZonesWereCollected())grayBitsValid=true;}finishMarkingValidation();#ifdef DEBUGfor(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){for(autoi:AllAllocKinds()){MOZ_ASSERT_IF(!IsBackgroundFinalized(i)||!sweepOnBackgroundThread,!zone->arenas.arenaListsToSweep(i));}}#endifAssertNoWrappersInGrayList(rt);}voidGCRuntime::beginCompactPhase(){MOZ_ASSERT(!isBackgroundSweeping());gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::COMPACT);MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());for(GCZonesIterzone(rt);!zone.done();zone.next()){if(CanRelocateZone(zone))zonesToMaybeCompact.ref().append(zone);}MOZ_ASSERT(!relocatedArenasToRelease);startedCompacting=true;}IncrementalProgressGCRuntime::compactPhase(JS::gcreason::Reasonreason,SliceBudget&sliceBudget,AutoLockForExclusiveAccess&lock){assertBackgroundSweepingFinished();MOZ_ASSERT(startedCompacting);gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::COMPACT);// TODO: JSScripts can move. If the sampler interrupts the GC in the// middle of relocating an arena, invalid JSScript pointers may be// accessed. Suppress all sampling until a finer-grained solution can be// found. 
See bug 1295775.AutoSuppressProfilerSamplingsuppressSampling(TlsContext.get());ZoneListrelocatedZones;Arena*relocatedArenas=nullptr;while(!zonesToMaybeCompact.ref().isEmpty()){Zone*zone=zonesToMaybeCompact.ref().front();zonesToMaybeCompact.ref().removeFront();MOZ_ASSERT(zone->group()->nursery().isEmpty());MOZ_ASSERT(zone->isGCFinished());zone->setGCState(Zone::Compact);if(relocateArenas(zone,reason,relocatedArenas,sliceBudget)){updateZonePointersToRelocatedCells(zone,lock);relocatedZones.append(zone);}else{zone->setGCState(Zone::Finished);}if(sliceBudget.isOverBudget())break;}if(!relocatedZones.isEmpty()){updateRuntimePointersToRelocatedCells(lock);do{Zone*zone=relocatedZones.front();relocatedZones.removeFront();zone->setGCState(Zone::Finished);}while(!relocatedZones.isEmpty());}if(ShouldProtectRelocatedArenas(reason))protectAndHoldArenas(relocatedArenas);elsereleaseRelocatedArenas(relocatedArenas);// Clear caches that can contain cell pointers.rt->caches().newObjectCache.purge();rt->caches().nativeIterCache.purge();if(rt->caches().evalCache.initialized())rt->caches().evalCache.clear();#ifdef DEBUGCheckHashTablesAfterMovingGC(rt);#endifreturnzonesToMaybeCompact.ref().isEmpty()?Finished:NotFinished;}voidGCRuntime::endCompactPhase(JS::gcreason::Reasonreason){startedCompacting=false;}voidGCRuntime::finishCollection(JS::gcreason::Reasonreason){assertBackgroundSweepingFinished();MOZ_ASSERT(marker.isDrained());marker.stop();clearBufferedGrayRoots();MemProfiler::SweepTenured(rt);uint64_tcurrentTime=PRMJ_Now();schedulingState.updateHighFrequencyMode(lastGCTime,currentTime,tunables);for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){if(zone->isCollecting()){MOZ_ASSERT(zone->isGCFinished());zone->setGCState(Zone::NoGC);}MOZ_ASSERT(!zone->isCollectingFromAnyThread());MOZ_ASSERT(!zone->wasGCStarted());}MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());lastGCTime=currentTime;}staticconstchar*HeapStateToLabel(JS::HeapStateheapState){switch(heapState){caseJS::HeapState::MinorCollecting:return"js::Nursery::collect";caseJS::HeapState::MajorCollecting:return"js::GCRuntime::collect";caseJS::HeapState::Tracing:return"JS_IterateCompartments";caseJS::HeapState::Idle:caseJS::HeapState::CycleCollecting:MOZ_CRASH("Should never have an Idle or CC heap state when pushing GC pseudo frames!");}MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");returnnullptr;}#ifdef DEBUGstaticboolAllNurseriesAreEmpty(JSRuntime*rt){for(ZoneGroupsItergroup(rt);!group.done();group.next()){if(!group->nursery().isEmpty())returnfalse;}returntrue;}#endif/* Start a new heap session. 
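 *
 * For example, a major collection enters a session like this (as gcCycle does
 * below):
 *
 *   AutoTraceSession session(rt, JS::HeapState::MajorCollecting);
 *
 * and the previous heap state is restored when the session goes out of scope.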
*/AutoTraceSession::AutoTraceSession(JSRuntime*rt,JS::HeapStateheapState):lock(rt),runtime(rt),prevState(TlsContext.get()->heapState),pseudoFrame(rt,HeapStateToLabel(heapState),ProfileEntry::Category::GC){MOZ_ASSERT(prevState==JS::HeapState::Idle);MOZ_ASSERT(heapState!=JS::HeapState::Idle);MOZ_ASSERT_IF(heapState==JS::HeapState::MajorCollecting,AllNurseriesAreEmpty(rt));TlsContext.get()->heapState=heapState;}AutoTraceSession::~AutoTraceSession(){MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());TlsContext.get()->heapState=prevState;}JS_PUBLIC_API(JS::HeapState)JS::CurrentThreadHeapState(){returnTlsContext.get()->heapState;}boolGCRuntime::canChangeActiveContext(JSContext*cx){// Threads cannot be in the middle of any operation that affects GC// behavior when execution transfers to another thread for cooperative// scheduling.returncx->heapState==JS::HeapState::Idle&&!cx->suppressGC&&!cx->inUnsafeRegion&&!cx->generationalDisabled&&!cx->compactingDisabledCount&&!cx->keepAtoms;}GCRuntime::IncrementalResultGCRuntime::resetIncrementalGC(gc::AbortReasonreason,AutoLockForExclusiveAccess&lock){MOZ_ASSERT(reason!=gc::AbortReason::None);switch(incrementalState){caseState::NotActive:returnIncrementalResult::Ok;caseState::MarkRoots:MOZ_CRASH("resetIncrementalGC did not expect MarkRoots state");break;caseState::Mark:{/* Cancel any ongoing marking. */marker.reset();marker.stop();clearBufferedGrayRoots();for(GCCompartmentsIterc(rt);!c.done();c.next())ResetGrayList(c);for(GCZonesIterzone(rt);!zone.done();zone.next()){MOZ_ASSERT(zone->isGCMarking());zone->setNeedsIncrementalBarrier(false);zone->setGCState(Zone::NoGC);}blocksToFreeAfterSweeping.ref().freeAll();incrementalState=State::NotActive;MOZ_ASSERT(!marker.shouldCheckCompartments());break;}caseState::Sweep:{marker.reset();for(CompartmentsIterc(rt,SkipAtoms);!c.done();c.next())c->scheduledForDestruction=false;/* Finish sweeping the current sweep group, then abort. */abortSweepAfterCurrentGroup=true;/* Don't perform any compaction after sweeping. 
*/boolwasCompacting=isCompacting;isCompacting=false;autounlimited=SliceBudget::unlimited();incrementalCollectSlice(unlimited,JS::gcreason::RESET,lock);isCompacting=wasCompacting;{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);rt->gc.waitBackgroundSweepOrAllocEnd();}break;}caseState::Finalize:{{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);rt->gc.waitBackgroundSweepOrAllocEnd();}boolwasCompacting=isCompacting;isCompacting=false;autounlimited=SliceBudget::unlimited();incrementalCollectSlice(unlimited,JS::gcreason::RESET,lock);isCompacting=wasCompacting;break;}caseState::Compact:{boolwasCompacting=isCompacting;isCompacting=true;startedCompacting=true;zonesToMaybeCompact.ref().clear();autounlimited=SliceBudget::unlimited();incrementalCollectSlice(unlimited,JS::gcreason::RESET,lock);isCompacting=wasCompacting;break;}caseState::Decommit:{autounlimited=SliceBudget::unlimited();incrementalCollectSlice(unlimited,JS::gcreason::RESET,lock);break;}}stats().reset(reason);#ifdef DEBUGassertBackgroundSweepingFinished();for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){MOZ_ASSERT(!zone->isCollectingFromAnyThread());MOZ_ASSERT(!zone->needsIncrementalBarrier());MOZ_ASSERT(!zone->isOnList());}MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());MOZ_ASSERT(incrementalState==State::NotActive);#endifreturnIncrementalResult::Reset;}namespace{classAutoGCSlice{public:explicitAutoGCSlice(JSRuntime*rt);~AutoGCSlice();private:JSRuntime*runtime;AutoSetThreadIsPerformingGCperformingGC;};}/* anonymous namespace */AutoGCSlice::AutoGCSlice(JSRuntime*rt):runtime(rt){for(GCZonesIterzone(rt);!zone.done();zone.next()){/* * Clear needsIncrementalBarrier early so we don't do any write * barriers during GC. We don't need to update the Ion barriers (which * is expensive) because Ion code doesn't run during GC. If need be, * we'll update the Ion barriers in ~AutoGCSlice. */if(zone->isGCMarking()){MOZ_ASSERT(zone->needsIncrementalBarrier());zone->setNeedsIncrementalBarrier(false);}else{MOZ_ASSERT(!zone->needsIncrementalBarrier());}}}AutoGCSlice::~AutoGCSlice(){/* We can't use GCZonesIter if this is the end of the last slice. */for(ZonesIterzone(runtime,WithAtoms);!zone.done();zone.next()){if(zone->isGCMarking()){zone->setNeedsIncrementalBarrier(true);zone->arenas.purge();}else{zone->setNeedsIncrementalBarrier(false);}}}voidGCRuntime::pushZealSelectedObjects(){#ifdef JS_GC_ZEAL/* Push selected objects onto the mark stack and clear the list. */for(JSObject**obj=selectedForMarking.ref().begin();obj!=selectedForMarking.ref().end();obj++)TraceManuallyBarrieredEdge(&marker,obj,"selected obj");#endif}staticboolIsShutdownGC(JS::gcreason::Reasonreason){returnreason==JS::gcreason::SHUTDOWN_CC||reason==JS::gcreason::DESTROY_RUNTIME;}staticboolShouldCleanUpEverything(JS::gcreason::Reasonreason,JSGCInvocationKindgckind){// During shutdown, we must clean everything up, for the sake of leak// detection. 
    // When a runtime has no contexts, or we're doing a GC before a
    // shutdown CC, those are strong indications that we're shutting down.
    return IsShutdownGC(reason) || gckind == GC_SHRINK;
}

void
GCRuntime::incrementalCollectSlice(SliceBudget& budget, JS::gcreason::Reason reason,
                                   AutoLockForExclusiveAccess& lock)
{
    AutoGCSlice slice(rt);

    bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME);

    gc::State initialState = incrementalState;

    bool useZeal = false;
#ifdef JS_GC_ZEAL
    if (reason == JS::gcreason::DEBUG_GC && !budget.isUnlimited()) {
        /*
         * Do the incremental collection type specified by zeal mode if the
         * collection was triggered by runDebugGC() and incremental GC has not
         * been cancelled by resetIncrementalGC().
         */
        useZeal = true;
    }
#endif

    MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);
    isIncremental = !budget.isUnlimited();

    if (useZeal && (hasZealMode(ZealMode::IncrementalRootsThenFinish) ||
                    hasZealMode(ZealMode::IncrementalMarkAllThenFinish) ||
                    hasZealMode(ZealMode::IncrementalSweepThenFinish)))
    {
        /*
         * Yields between slices occur at predetermined points in these modes;
         * the budget is not used.
         */
        budget.makeUnlimited();
    }

    switch (incrementalState) {
      case State::NotActive:
        initialReason = reason;
        cleanUpEverything = ShouldCleanUpEverything(reason, invocationKind);
        isCompacting = shouldCompact();
        lastMarkSlice = false;

        incrementalState = State::MarkRoots;

        MOZ_FALLTHROUGH;

      case State::MarkRoots:
        if (!beginMarkPhase(reason, lock)) {
            incrementalState = State::NotActive;
            return;
        }

        if (!destroyingRuntime)
            pushZealSelectedObjects();

        incrementalState = State::Mark;

        if (isIncremental && useZeal && hasZealMode(ZealMode::IncrementalRootsThenFinish))
            break;

        MOZ_FALLTHROUGH;

      case State::Mark:
        for (const CooperatingContext& target : rt->cooperatingContexts())
            AutoGCRooter::traceAllWrappers(target, &marker);

        /* If we needed delayed marking for gray roots, then collect until done. */
        if (!hasBufferedGrayRoots()) {
            budget.makeUnlimited();
            isIncremental = false;
        }

        if (drainMarkStack(budget, gcstats::PhaseKind::MARK) == NotFinished)
            break;

        MOZ_ASSERT(marker.isDrained());

        /*
         * In incremental GCs where we have already performed more than one
         * slice we yield after marking with the aim of starting the sweep in
         * the next slice, since the first slice of sweeping can be expensive.
         *
         * This is modified by the various zeal modes. We don't yield in
         * IncrementalRootsThenFinish mode and we always yield in
         * IncrementalMarkAllThenFinish mode.
         *
         * We will need to mark anything new on the stack when we resume, so
         * we stay in Mark state.
         */
        if (!lastMarkSlice && isIncremental &&
            ((initialState == State::Mark &&
              !(useZeal && hasZealMode(ZealMode::IncrementalRootsThenFinish))) ||
             (useZeal && hasZealMode(ZealMode::IncrementalMarkAllThenFinish))))
        {
            lastMarkSlice = true;
            break;
        }

        incrementalState = State::Sweep;

        /*
         * This runs to completion, but we don't continue if the budget is
         * now exhausted.
         */
        beginSweepPhase(reason, lock);
        if (budget.isOverBudget())
            break;

        /*
         * Always yield here when running in incremental multi-slice zeal
         * mode, so RunDebugGC can reset the slice budget.
*/if(isIncremental&&useZeal&&(hasZealMode(ZealMode::IncrementalMultipleSlices)||hasZealMode(ZealMode::IncrementalSweepThenFinish))){break;}MOZ_FALLTHROUGH;caseState::Sweep:if(performSweepActions(budget,lock)==NotFinished)break;endSweepPhase(destroyingRuntime,lock);incrementalState=State::Finalize;MOZ_FALLTHROUGH;caseState::Finalize:{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);// Yield until background finalization is done.if(!budget.isUnlimited()){// Poll for end of background sweepingAutoLockGClock(rt);if(isBackgroundSweeping())break;}else{waitBackgroundSweepEnd();}}{// Re-sweep the zones list, now that background finalization is// finished to actually remove and free dead zones.gcstats::AutoPhaseap1(stats(),gcstats::PhaseKind::SWEEP);gcstats::AutoPhaseap2(stats(),gcstats::PhaseKind::DESTROY);AutoSetThreadIsSweepingthreadIsSweeping;FreeOpfop(rt);sweepZoneGroups(&fop,destroyingRuntime);}MOZ_ASSERT(!startedCompacting);incrementalState=State::Compact;// Always yield before compacting since it is not incremental.if(isCompacting&&!budget.isUnlimited())break;MOZ_FALLTHROUGH;caseState::Compact:if(isCompacting){if(!startedCompacting)beginCompactPhase();if(compactPhase(reason,budget,lock)==NotFinished)break;endCompactPhase(reason);}startDecommit();incrementalState=State::Decommit;MOZ_FALLTHROUGH;caseState::Decommit:{gcstats::AutoPhaseap(stats(),gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);// Yield until background decommit is done.if(!budget.isUnlimited()&&decommitTask.isRunning())break;decommitTask.join();}finishCollection(reason);incrementalState=State::NotActive;break;}}gc::AbortReasongc::IsIncrementalGCUnsafe(JSRuntime*rt){MOZ_ASSERT(!TlsContext.get()->suppressGC);if(!rt->gc.isIncrementalGCAllowed())returngc::AbortReason::IncrementalDisabled;returngc::AbortReason::None;}GCRuntime::IncrementalResultGCRuntime::budgetIncrementalGC(boolnonincrementalByAPI,JS::gcreason::Reasonreason,SliceBudget&budget,AutoLockForExclusiveAccess&lock){if(nonincrementalByAPI){stats().nonincremental(gc::AbortReason::NonIncrementalRequested);budget.makeUnlimited();// Reset any in progress incremental GC if this was triggered via the// API. 
        // This isn't required for correctness, but sometimes during tests
        // the caller expects this GC to collect certain objects, and we need
        // to make sure to collect everything possible.
        if (reason != JS::gcreason::ALLOC_TRIGGER)
            return resetIncrementalGC(gc::AbortReason::NonIncrementalRequested, lock);

        return IncrementalResult::Ok;
    }

    if (reason == JS::gcreason::ABORT_GC) {
        budget.makeUnlimited();
        stats().nonincremental(gc::AbortReason::AbortRequested);
        return resetIncrementalGC(gc::AbortReason::AbortRequested, lock);
    }

    AbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
    if (unsafeReason == AbortReason::None) {
        if (reason == JS::gcreason::COMPARTMENT_REVIVED)
            unsafeReason = gc::AbortReason::CompartmentRevived;
        else if (mode != JSGC_MODE_INCREMENTAL)
            unsafeReason = gc::AbortReason::ModeChange;
    }

    if (unsafeReason != AbortReason::None) {
        budget.makeUnlimited();
        stats().nonincremental(unsafeReason);
        return resetIncrementalGC(unsafeReason, lock);
    }

    if (isTooMuchMalloc()) {
        budget.makeUnlimited();
        stats().nonincremental(AbortReason::MallocBytesTrigger);
    }

    bool reset = false;
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        if (zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
            budget.makeUnlimited();
            stats().nonincremental(AbortReason::GCBytesTrigger);
        }

        if (isIncrementalGCInProgress() && zone->isGCScheduled() != zone->wasGCStarted())
            reset = true;

        if (zone->isTooMuchMalloc()) {
            budget.makeUnlimited();
            stats().nonincremental(AbortReason::MallocBytesTrigger);
        }
    }

    if (reset)
        return resetIncrementalGC(AbortReason::ZoneChange, lock);

    return IncrementalResult::Ok;
}

namespace {

class AutoScheduleZonesForGC
{
    JSRuntime* rt_;

  public:
    explicit AutoScheduleZonesForGC(JSRuntime* rt) : rt_(rt) {
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            if (rt->gc.gcMode() == JSGC_MODE_GLOBAL)
                zone->scheduleGC();

            /* This is a heuristic to avoid resets. */
            if (rt->gc.isIncrementalGCInProgress() && zone->needsIncrementalBarrier())
                zone->scheduleGC();

            /* This is a heuristic to reduce the total number of collections. */
            if (zone->usage.gcBytes() >=
                zone->threshold.allocTrigger(rt->gc.schedulingState.inHighFrequencyGCMode()))
            {
                zone->scheduleGC();
            }
        }
    }

    ~AutoScheduleZonesForGC() {
        for (ZonesIter zone(rt_, WithAtoms); !zone.done(); zone.next())
            zone->unscheduleGC();
    }
};

/*
 * An invariant of our GC/CC interaction is that there must not ever be any
 * black to gray edges in the system. It is possible to violate this with
 * simple compartmental GC. For example, in GC[n], we collect in both
 * compartmentA and compartmentB, and mark both sides of the cross-compartment
 * edge gray. Later in GC[n+1], we only collect compartmentA, but this time
 * mark it black. Now we are violating the invariants and must fix it somehow.
 *
 * To prevent this situation, we explicitly detect the black->gray state when
 * marking cross-compartment edges -- see ShouldMarkCrossCompartment -- adding
 * each violating edge to foundBlackGrayEdges. After we leave the trace
 * session for each GC slice, we "ExposeToActiveJS" on each of these edges
 * (which we cannot do safely from the guts of the GC).
 */
class AutoExposeLiveCrossZoneEdges
{
    BlackGrayEdgeVector* edges;

  public:
    explicit AutoExposeLiveCrossZoneEdges(BlackGrayEdgeVector* edgesPtr) : edges(edgesPtr) {
        MOZ_ASSERT(edges->empty());
    }
    ~AutoExposeLiveCrossZoneEdges() {
        for (auto& target : *edges) {
            MOZ_ASSERT(target);
            MOZ_ASSERT(!target->zone()->isCollecting());
            UnmarkGrayCellRecursively(target, target->getTraceKind());
        }
        edges->clear();
    }
};

} /* anonymous namespace */

/*
 * Run one GC "cycle" (either a slice of incremental GC or an entire
 * non-incremental GC).
 * We disable inlining to ensure that the bottom of the stack with possible GC
 * roots recorded in MarkRuntime excludes any pointers we use during the
 * marking implementation.
 *
 * Returns IncrementalResult::Reset if we "reset" an existing incremental GC,
 * which would force us to run another cycle.
 */
MOZ_NEVER_INLINE GCRuntime::IncrementalResult
GCRuntime::gcCycle(bool nonincrementalByAPI, SliceBudget& budget, JS::gcreason::Reason reason)
{
    // Note that the following is allowed to re-enter GC in the finalizer.
    AutoNotifyGCActivity notify(*this);

    gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), invocationKind, budget, reason);

    AutoExposeLiveCrossZoneEdges aelcze(&foundBlackGrayEdges.ref());

    EvictAllNurseries(rt, reason);

    AutoTraceSession session(rt, JS::HeapState::MajorCollecting);

    majorGCTriggerReason = JS::gcreason::NO_REASON;
    interFrameGC = true;

    number++;
    if (!isIncrementalGCInProgress())
        incMajorGcNumber();

    // It's ok if threads other than the active thread have suppressGC set, as
    // they are operating on zones which will not be collected from here.
    MOZ_ASSERT(!TlsContext.get()->suppressGC);

    // Assert if this is a GC unsafe region.
    TlsContext.get()->verifyIsSafeToGC();

    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);

        // Background finalization and decommit are finished by definition
        // before we can start a new GC session.
        if (!isIncrementalGCInProgress()) {
            assertBackgroundSweepingFinished();
            MOZ_ASSERT(!decommitTask.isRunning());
        }

        // We must also wait for background allocation to finish so we can
        // avoid taking the GC lock when manipulating the chunks during the GC.
        // The background alloc task can run between slices, so we must wait
        // for it at the start of every slice.
        allocTask.cancel(GCParallelTask::CancelAndWait);
    }

    // We don't allow off-thread parsing to start while we're doing an
    // incremental GC.
    MOZ_ASSERT_IF(rt->activeGCInAtomsZone(), !rt->hasHelperThreadZones());

    auto result = budgetIncrementalGC(nonincrementalByAPI, reason, budget, session.lock);

    // If an ongoing incremental GC was reset, we may need to restart.
    if (result == IncrementalResult::Reset) {
        MOZ_ASSERT(!isIncrementalGCInProgress());
        return result;
    }

    TraceMajorGCStart();

    incrementalCollectSlice(budget, reason, session.lock);

    chunkAllocationSinceLastGC = false;

#ifdef JS_GC_ZEAL
    /* Keeping these around after a GC is dangerous. */
    clearSelectedForMarking();
#endif

    /* Clear gcMallocBytes for all zones. */
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
        zone->resetAllMallocBytes();

    resetMallocBytes();

    TraceMajorGCEnd();

    return IncrementalResult::Ok;
}

#ifdef JS_GC_ZEAL
static bool
IsDeterministicGCReason(JS::gcreason::Reason reason)
{
    switch (reason) {
      case JS::gcreason::API:
      case JS::gcreason::DESTROY_RUNTIME:
      case JS::gcreason::LAST_DITCH:
      case JS::gcreason::TOO_MUCH_MALLOC:
      case JS::gcreason::ALLOC_TRIGGER:
      case JS::gcreason::DEBUG_GC:
      case JS::gcreason::CC_FORCED:
      case JS::gcreason::SHUTDOWN_CC:
      case JS::gcreason::ABORT_GC:
        return true;

      default:
        return false;
    }
}
#endif

gcstats::ZoneGCStats
GCRuntime::scanZonesBeforeGC()
{
    gcstats::ZoneGCStats zoneStats;
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        zoneStats.zoneCount++;
        if (zone->isGCScheduled()) {
            zoneStats.collectedZoneCount++;
            zoneStats.collectedCompartmentCount += zone->compartments().length();
        }
    }

    for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next())
        zoneStats.compartmentCount++;

    return zoneStats;
}

// The GC can only clean up scheduledForDestruction compartments that were
// marked live by a barrier (e.g. by RemapWrappers from a navigation event).
// It is also common to have compartments held live because they are part of a
// cycle in gecko, e.g.
involving the HTMLDocument wrapper. In this case, we// need to run the CycleCollector in order to remove these edges before the// compartment can be freed.voidGCRuntime::maybeDoCycleCollection(){conststaticdoubleExcessiveGrayCompartments=0.8;conststaticsize_tLimitGrayCompartments=200;size_tcompartmentsTotal=0;size_tcompartmentsGray=0;for(CompartmentsIterc(rt,SkipAtoms);!c.done();c.next()){++compartmentsTotal;GlobalObject*global=c->unsafeUnbarrieredMaybeGlobal();if(global&&global->asTenured().isMarked(GRAY))++compartmentsGray;}doublegrayFraction=double(compartmentsGray)/double(compartmentsTotal);if(grayFraction>ExcessiveGrayCompartments||compartmentsGray>LimitGrayCompartments)callDoCycleCollectionCallback(rt->activeContextFromOwnThread());}voidGCRuntime::checkCanCallAPI(){MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));/* If we attempt to invoke the GC while we are running in the GC, assert. */MOZ_RELEASE_ASSERT(!JS::CurrentThreadIsHeapBusy());MOZ_ASSERT(TlsContext.get()->isAllocAllowed());}boolGCRuntime::checkIfGCAllowedInCurrentState(JS::gcreason::Reasonreason){if(TlsContext.get()->suppressGC)returnfalse;// Only allow shutdown GCs when we're destroying the runtime. This keeps// the GC callback from triggering a nested GC and resetting global state.if(rt->isBeingDestroyed()&&!IsShutdownGC(reason))returnfalse;#ifdef JS_GC_ZEALif(deterministicOnly&&!IsDeterministicGCReason(reason))returnfalse;#endifreturntrue;}boolGCRuntime::shouldRepeatForDeadZone(JS::gcreason::Reasonreason){MOZ_ASSERT_IF(reason==JS::gcreason::COMPARTMENT_REVIVED,!isIncremental);if(!isIncremental||isIncrementalGCInProgress())returnfalse;for(CompartmentsIterc(rt,SkipAtoms);!c.done();c.next()){if(c->scheduledForDestruction)returntrue;}returnfalse;}voidGCRuntime::collect(boolnonincrementalByAPI,SliceBudgetbudget,JS::gcreason::Reasonreason){// Checks run for each request, even if we do not actually GC.checkCanCallAPI();// Check if we are allowed to GC at this time before proceeding.if(!checkIfGCAllowedInCurrentState(reason))return;AutoTraceLoglogGC(TraceLoggerForCurrentThread(),TraceLogger_GC);AutoStopVerifyingBarriersav(rt,IsShutdownGC(reason));AutoEnqueuePendingParseTasksAfterGCaept(*this);AutoScheduleZonesForGCasz(rt);boolrepeat=false;do{poked=false;boolwasReset=gcCycle(nonincrementalByAPI,budget,reason)==IncrementalResult::Reset;if(reason==JS::gcreason::ABORT_GC){MOZ_ASSERT(!isIncrementalGCInProgress());break;}boolrepeatForDeadZone=false;if(poked&&cleanUpEverything){/* Need to re-schedule all zones for GC. */JS::PrepareForFullGC(rt->activeContextFromOwnThread());}elseif(shouldRepeatForDeadZone(reason)&&!wasReset){/* * This code makes an extra effort to collect compartments that we * thought were dead at the start of the GC. See the large comment * in beginMarkPhase. */repeatForDeadZone=true;reason=JS::gcreason::COMPARTMENT_REVIVED;}/* * If we reset an existing GC, we need to start a new one. Also, we * repeat GCs that happen during shutdown (the gcShouldCleanUpEverything * case) until we can be sure that no additional garbage is created * (which typically happens if roots are dropped during finalizers). 
*/repeat=(poked&&cleanUpEverything)||wasReset||repeatForDeadZone;}while(repeat);if(reason==JS::gcreason::COMPARTMENT_REVIVED)maybeDoCycleCollection();#ifdef JS_GC_ZEALif(rt->hasZealMode(ZealMode::CheckHeapAfterGC)){gcstats::AutoPhaseap(rt->gc.stats(),gcstats::PhaseKind::TRACE_HEAP);CheckHeapAfterGC(rt);}#endif}js::AutoEnqueuePendingParseTasksAfterGC::~AutoEnqueuePendingParseTasksAfterGC(){if(!OffThreadParsingMustWaitForGC(gc_.rt))EnqueuePendingParseTasksAfterGC(gc_.rt);}SliceBudgetGCRuntime::defaultBudget(JS::gcreason::Reasonreason,int64_tmillis){if(millis==0){if(reason==JS::gcreason::ALLOC_TRIGGER)millis=defaultSliceBudget();elseif(schedulingState.inHighFrequencyGCMode()&&tunables.isDynamicMarkSliceEnabled())millis=defaultSliceBudget()*IGC_MARK_SLICE_MULTIPLIER;elsemillis=defaultSliceBudget();}returnSliceBudget(TimeBudget(millis));}voidGCRuntime::gc(JSGCInvocationKindgckind,JS::gcreason::Reasonreason){invocationKind=gckind;collect(true,SliceBudget::unlimited(),reason);}voidGCRuntime::startGC(JSGCInvocationKindgckind,JS::gcreason::Reasonreason,int64_tmillis){MOZ_ASSERT(!isIncrementalGCInProgress());if(!JS::IsIncrementalGCEnabled(TlsContext.get())){gc(gckind,reason);return;}invocationKind=gckind;collect(false,defaultBudget(reason,millis),reason);}voidGCRuntime::gcSlice(JS::gcreason::Reasonreason,int64_tmillis){MOZ_ASSERT(isIncrementalGCInProgress());collect(false,defaultBudget(reason,millis),reason);}voidGCRuntime::finishGC(JS::gcreason::Reasonreason){MOZ_ASSERT(isIncrementalGCInProgress());// If we're not collecting because we're out of memory then skip the// compacting phase if we need to finish an ongoing incremental GC// non-incrementally to avoid janking the browser.if(!IsOOMReason(initialReason)){if(incrementalState==State::Compact){abortGC();return;}isCompacting=false;}collect(false,SliceBudget::unlimited(),reason);}voidGCRuntime::abortGC(){MOZ_ASSERT(isIncrementalGCInProgress());checkCanCallAPI();MOZ_ASSERT(!TlsContext.get()->suppressGC);collect(false,SliceBudget::unlimited(),JS::gcreason::ABORT_GC);}voidGCRuntime::notifyDidPaint(){MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));#ifdef JS_GC_ZEALif(hasZealMode(ZealMode::FrameVerifierPre))verifyPreBarriers();if(hasZealMode(ZealMode::FrameGC)){JS::PrepareForFullGC(rt->activeContextFromOwnThread());gc(GC_NORMAL,JS::gcreason::REFRESH_FRAME);return;}#endifif(isIncrementalGCInProgress()&&!interFrameGC&&tunables.areRefreshFrameSlicesEnabled()){JS::PrepareForIncrementalGC(rt->activeContextFromOwnThread());gcSlice(JS::gcreason::REFRESH_FRAME);}interFrameGC=false;}staticboolZonesSelected(JSRuntime*rt){for(ZonesIterzone(rt,WithAtoms);!zone.done();zone.next()){if(zone->isGCScheduled())returntrue;}returnfalse;}voidGCRuntime::startDebugGC(JSGCInvocationKindgckind,SliceBudget&budget){MOZ_ASSERT(!isIncrementalGCInProgress());if(!ZonesSelected(rt))JS::PrepareForFullGC(rt->activeContextFromOwnThread());invocationKind=gckind;collect(false,budget,JS::gcreason::DEBUG_GC);}voidGCRuntime::debugGCSlice(SliceBudget&budget){MOZ_ASSERT(isIncrementalGCInProgress());if(!ZonesSelected(rt))JS::PrepareForIncrementalGC(rt->activeContextFromOwnThread());collect(false,budget,JS::gcreason::DEBUG_GC);}/* Schedule a full GC unless a zone will already be collected. 
/* Schedule a full GC unless a zone will already be collected. */
void
js::PrepareForDebugGC(JSRuntime* rt)
{
    if (!ZonesSelected(rt))
        JS::PrepareForFullGC(rt->activeContextFromOwnThread());
}

void
GCRuntime::onOutOfMallocMemory()
{
    // Stop allocating new chunks.
    allocTask.cancel(GCParallelTask::CancelAndWait);

    // Make sure we release anything queued for release.
    decommitTask.join();

    // Wait for background free of nursery huge slots to finish.
    for (ZoneGroupsIter group(rt); !group.done(); group.next())
        group->nursery().waitBackgroundFreeEnd();

    AutoLockGC lock(rt);
    onOutOfMallocMemory(lock);
}

void
GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock)
{
    // Release any relocated arenas we may be holding on to, without releasing
    // the GC lock.
    releaseHeldRelocatedArenasWithoutUnlocking(lock);

    // Throw away any excess chunks we have lying around.
    freeEmptyChunks(rt, lock);

    // Immediately decommit as many arenas as possible in the hopes that this
    // might let the OS scrape together enough pages to satisfy the failing
    // malloc request.
    decommitAllWithoutUnlocking(lock);
}

void
GCRuntime::minorGC(JS::gcreason::Reason reason, gcstats::PhaseKind phase)
{
    MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());

    if (TlsContext.get()->suppressGC)
        return;

    gcstats::AutoPhase ap(rt->gc.stats(), phase);

    nursery().clearMinorGCRequest();
    TraceLoggerThread* logger = TraceLoggerForCurrentThread();
    AutoTraceLog logMinorGC(logger, TraceLogger_MinorGC);
    nursery().collect(reason);
    MOZ_ASSERT(nursery().isEmpty());

    blocksToFreeAfterMinorGC.ref().freeAll();

#ifdef JS_GC_ZEAL
    if (rt->hasZealMode(ZealMode::CheckHeapAfterGC))
        CheckHeapAfterGC(rt);
#endif

    {
        AutoLockGC lock(rt);
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
            maybeAllocTriggerZoneGC(zone, lock);
    }
}

JS::AutoDisableGenerationalGC::AutoDisableGenerationalGC(JSContext* cx)
  : cx(cx)
{
    if (!cx->generationalDisabled) {
        cx->runtime()->gc.evictNursery(JS::gcreason::API);
        cx->nursery().disable();
    }
    ++cx->generationalDisabled;
}

JS::AutoDisableGenerationalGC::~AutoDisableGenerationalGC()
{
    if (--cx->generationalDisabled == 0) {
        for (ZoneGroupsIter group(cx->runtime()); !group.done(); group.next())
            group->nursery().enable();
    }
}

JS_PUBLIC_API(bool)
JS::IsGenerationalGCEnabled(JSRuntime* rt)
{
    return !TlsContext.get()->generationalDisabled;
}

bool
GCRuntime::gcIfRequested()
{
    // This method returns whether a major GC was performed.

    if (nursery().minorGCRequested())
        minorGC(nursery().minorGCTriggerReason());

    if (majorGCRequested()) {
        if (!isIncrementalGCInProgress())
            startGC(GC_NORMAL, majorGCTriggerReason);
        else
            gcSlice(majorGCTriggerReason);
        return true;
    }

    return false;
}

void
js::gc::FinishGC(JSContext* cx)
{
    if (JS::IsIncrementalGCInProgress(cx)) {
        JS::PrepareForIncrementalGC(cx);
        JS::FinishIncrementalGC(cx, JS::gcreason::API);
    }

    for (ZoneGroupsIter group(cx->runtime()); !group.done(); group.next())
        group->nursery().waitBackgroundFreeEnd();
}

AutoPrepareForTracing::AutoPrepareForTracing(JSContext* cx, ZoneSelector selector)
{
    js::gc::FinishGC(cx);
    session_.emplace(cx->runtime());
}
JSCompartment*
js::NewCompartment(JSContext* cx, JSPrincipals* principals,
                   const JS::CompartmentOptions& options)
{
    JSRuntime* rt = cx->runtime();
    JS_AbortIfWrongThread(cx);

    ScopedJSDeletePtr<ZoneGroup> groupHolder;
    ScopedJSDeletePtr<Zone> zoneHolder;

    Zone* zone = nullptr;
    ZoneGroup* group = nullptr;
    JS::ZoneSpecifier zoneSpec = options.creationOptions().zoneSpecifier();
    switch (zoneSpec) {
      case JS::SystemZone:
        // systemZone and possibly systemZoneGroup might be null here, in which
        // case we'll make a zone/group and set these fields below.
        zone = rt->gc.systemZone;
        group = rt->gc.systemZoneGroup;
        break;
      case JS::ExistingZone:
        zone = static_cast<Zone*>(options.creationOptions().zonePointer());
        MOZ_ASSERT(zone);
        group = zone->group();
        break;
      case JS::NewZoneInNewZoneGroup:
        break;
      case JS::NewZoneInSystemZoneGroup:
        // As above, systemZoneGroup might be null here.
        group = rt->gc.systemZoneGroup;
        break;
      case JS::NewZoneInExistingZoneGroup:
        group = static_cast<ZoneGroup*>(options.creationOptions().zonePointer());
        MOZ_ASSERT(group);
        break;
    }

    if (group) {
        // Take over ownership of the group while we create the compartment/zone.
        group->enter(cx);
    } else {
        MOZ_ASSERT(!zone);
        group = cx->new_<ZoneGroup>(rt);
        if (!group)
            return nullptr;

        groupHolder.reset(group);

        if (!group->init()) {
            ReportOutOfMemory(cx);
            return nullptr;
        }

        if (cx->generationalDisabled)
            group->nursery().disable();
    }

    if (!zone) {
        zone = cx->new_<Zone>(cx->runtime(), group);
        if (!zone)
            return nullptr;

        zoneHolder.reset(zone);

        const JSPrincipals* trusted = rt->trustedPrincipals();
        bool isSystem = principals && principals == trusted;
        if (!zone->init(isSystem)) {
            ReportOutOfMemory(cx);
            return nullptr;
        }
    }

    ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options));
    if (!compartment || !compartment->init(cx))
        return nullptr;

    // Set up the principals.
    JS_SetCompartmentPrincipals(compartment, principals);

    AutoLockGC lock(rt);

    if (!zone->compartments().append(compartment.get())) {
        ReportOutOfMemory(cx);
        return nullptr;
    }

    if (zoneHolder) {
        if (!group->zones().append(zone)) {
            ReportOutOfMemory(cx);
            return nullptr;
        }

        // Lazily set the runtime's system zone.
        if (zoneSpec == JS::SystemZone) {
            MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
            rt->gc.systemZone = zone;
            zone->isSystem = true;
        }
    }

    if (groupHolder) {
        if (!rt->gc.groups.ref().append(group)) {
            ReportOutOfMemory(cx);
            return nullptr;
        }

        // Lazily set the runtime's system zone group.
        if (zoneSpec == JS::SystemZone || zoneSpec == JS::NewZoneInSystemZoneGroup) {
            MOZ_RELEASE_ASSERT(!rt->gc.systemZoneGroup);
            rt->gc.systemZoneGroup = group;
            group->setUseExclusiveLocking();
        }
    }

    zoneHolder.forget();
    groupHolder.forget();
    group->leave();
    return compartment.forget();
}
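// A note on the error handling in NewCompartment() above: zoneHolder and
// groupHolder own any newly created Zone / ZoneGroup until every fallible
// step has succeeded. They are only forget()ten at the very end, so an early
// return automatically destroys the partially constructed objects.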
void
gc::MergeCompartments(JSCompartment* source, JSCompartment* target)
{
    // The source compartment must be specifically flagged as mergeable. This
    // also implies that the compartment is not visible to the debugger.
    MOZ_ASSERT(source->creationOptions_.mergeable());
    MOZ_ASSERT(source->creationOptions_.invisibleToDebugger());

    MOZ_ASSERT(source->creationOptions().addonIdOrNull() ==
               target->creationOptions().addonIdOrNull());

    JSContext* cx = source->runtimeFromActiveCooperatingThread()->activeContextFromOwnThread();

    MOZ_ASSERT(!source->zone()->wasGCStarted());
    MOZ_ASSERT(!target->zone()->wasGCStarted());
    JS::AutoAssertNoGC nogc(cx);

    AutoTraceSession session(cx->runtime());

    // Cleanup tables and other state in the source compartment that will be
    // meaningless after merging into the target compartment.

    source->clearTables();
    source->zone()->clearTables();
    source->unsetIsDebuggee();

    // The delazification flag indicates the presence of LazyScripts in a
    // compartment for the Debugger API, so if the source compartment created
    // LazyScripts, the flag must be propagated to the target compartment.
    if (source->needsDelazificationForDebugger())
        target->scheduleDelazificationForDebugger();

    // Release any relocated arenas which we may be holding on to as they might
    // be in the source zone.
    cx->runtime()->gc.releaseHeldRelocatedArenas();

    // Fixup compartment pointers in source to refer to target, and make sure
    // type information generations are in sync.

    for (auto script = source->zone()->cellIter<JSScript>(); !script.done(); script.next()) {
        MOZ_ASSERT(script->compartment() == source);
        script->compartment_ = target;
        script->setTypesGeneration(target->zone()->types.generation);
    }

    for (auto group = source->zone()->cellIter<ObjectGroup>(); !group.done(); group.next()) {
        group->setGeneration(target->zone()->types.generation);
        group->compartment_ = target;

        // Remove any unboxed layouts from the list in the off thread
        // compartment. These do not need to be reinserted in the target
        // compartment's list, as the list is not required to be complete.
        if (UnboxedLayout* layout = group->maybeUnboxedLayoutDontCheckGeneration())
            layout->detachFromCompartment();
    }

    // Fixup zone pointers in source's zone to refer to target's zone.
    for (auto thingKind : AllAllocKinds()) {
        for (ArenaIter aiter(source->zone(), thingKind); !aiter.done(); aiter.next()) {
            Arena* arena = aiter.get();
            arena->zone = target->zone();
        }
    }

    // The source should be the only compartment in its zone.
    for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next())
        MOZ_ASSERT(c.get() == source);

    // Merge the allocator, stats and UIDs in source's zone into target's zone.
    target->zone()->arenas.adoptArenas(cx->runtime(), &source->zone()->arenas);
    target->zone()->usage.adopt(source->zone()->usage);
    target->zone()->adoptUniqueIds(source->zone());

    // Merge other info in source's zone into target's zone.
    target->zone()->types.typeLifoAlloc().transferFrom(&source->zone()->types.typeLifoAlloc());

    // Atoms which are marked in source's zone are now marked in target's zone.
    cx->atomMarking().adoptMarkedAtoms(target->zone(), source->zone());
}
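// With ZealMode::IncrementalMultipleSlices, runDebugGC() below starts with a
// work budget of zealFrequency / 2 and doubles it on every slice. For
// example, with zealFrequency == 100 the successive slices get budgets of
// 50, 100, 200, 400, ... work units until the collection completes.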
void
GCRuntime::runDebugGC()
{
#ifdef JS_GC_ZEAL
    if (TlsContext.get()->suppressGC)
        return;

    if (hasZealMode(ZealMode::GenerationalGC))
        return minorGC(JS::gcreason::DEBUG_GC);

    PrepareForDebugGC(rt);

    auto budget = SliceBudget::unlimited();
    if (hasZealMode(ZealMode::IncrementalRootsThenFinish) ||
        hasZealMode(ZealMode::IncrementalMarkAllThenFinish) ||
        hasZealMode(ZealMode::IncrementalMultipleSlices) ||
        hasZealMode(ZealMode::IncrementalSweepThenFinish))
    {
        js::gc::State initialState = incrementalState;
        if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
            /*
             * Start with a small slice limit and double it every slice. This
             * ensures that we get multiple slices, and collection runs to
             * completion.
             */
            if (!isIncrementalGCInProgress())
                incrementalLimit = zealFrequency / 2;
            else
                incrementalLimit *= 2;
            budget = SliceBudget(WorkBudget(incrementalLimit));
        } else {
            // This triggers incremental GC but is actually ignored by IncrementalMarkSlice.
            budget = SliceBudget(WorkBudget(1));
        }

        if (!isIncrementalGCInProgress())
            invocationKind = GC_SHRINK;
        collect(false, budget, JS::gcreason::DEBUG_GC);

        /*
         * For multi-slice zeal, reset the slice size when we get to the sweep
         * or compact phases.
         */
        if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
            if ((initialState == State::Mark && incrementalState == State::Sweep) ||
                (initialState == State::Sweep && incrementalState == State::Compact))
            {
                incrementalLimit = zealFrequency / 2;
            }
        }
    } else if (hasZealMode(ZealMode::Compact)) {
        gc(GC_SHRINK, JS::gcreason::DEBUG_GC);
    } else {
        gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
    }
#endif
}

void
GCRuntime::setFullCompartmentChecks(bool enabled)
{
    MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    fullCompartmentChecks = enabled;
}

#ifdef JS_GC_ZEAL
bool
GCRuntime::selectForMarking(JSObject* object)
{
    MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    return selectedForMarking.ref().append(object);
}

void
GCRuntime::clearSelectedForMarking()
{
    selectedForMarking.ref().clearAndFree();
}

void
GCRuntime::setDeterministic(bool enabled)
{
    MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    deterministicOnly = enabled;
}
#endif

#ifdef DEBUG
/* Should only be called manually under gdb */
void PreventGCDuringInteractiveDebug()
{
    TlsContext.get()->suppressGC++;
}
#endif

void
js::ReleaseAllJITCode(FreeOp* fop, bool addMarkers)
{
    js::CancelOffThreadIonCompile(fop->runtime());

    JSRuntime::AutoProhibitActiveContextChange apacc(fop->runtime());
    for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
        zone->setPreservingCode(false);
        zone->discardJitCode(fop, /* discardBaselineCode = */ true, addMarkers);
    }
}

void
ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind)
{
    ArenaLists::BackgroundFinalizeState* bfs = &backgroundFinalizeState(thingKind);
    switch (*bfs) {
      case BFS_DONE:
        break;
      default:
        MOZ_ASSERT_UNREACHABLE("Background finalization in progress, but it should not be.");
        break;
    }
}

void
ArenaLists::adoptArenas(JSRuntime* rt, ArenaLists* fromArenaLists)
{
    // GC should be inactive, but still take the lock as a kind of read fence.
    AutoLockGC lock(rt);

    fromArenaLists->purge();

    for (auto thingKind : AllAllocKinds()) {
        // When we enter a parallel section, we join the background
        // thread, and we do not run GC while in the parallel section,
        // so no finalizer should be active!
        normalizeBackgroundFinalizeState(thingKind);
        fromArenaLists->normalizeBackgroundFinalizeState(thingKind);

        ArenaList* fromList = &fromArenaLists->arenaLists(thingKind);
        ArenaList* toList = &arenaLists(thingKind);
        fromList->check();
        toList->check();
        Arena* next;
        for (Arena* fromArena = fromList->head(); fromArena; fromArena = next) {
            // Copy fromArena->next before releasing/reinserting.
            next = fromArena->next;

            MOZ_ASSERT(!fromArena->isEmpty());
            toList->insertAtCursor(fromArena);
        }
        fromList->clear();
        toList->check();
    }
}

bool
ArenaLists::containsArena(JSRuntime* rt, Arena* needle)
{
    AutoLockGC lock(rt);
    ArenaList& list = arenaLists(needle->getAllocKind());
    for (Arena* arena = list.head(); arena; arena = arena->next) {
        if (arena == needle)
            return true;
    }
    return false;
}

AutoSuppressGC::AutoSuppressGC(JSContext* cx)
  : suppressGC_(cx->suppressGC.ref())
{
    suppressGC_++;
}

bool
js::UninlinedIsInsideNursery(const gc::Cell* cell)
{
    return IsInsideNursery(cell);
}
#ifdef DEBUG
AutoDisableProxyCheck::AutoDisableProxyCheck()
{
    TlsContext.get()->disableStrictProxyChecking();
}

AutoDisableProxyCheck::~AutoDisableProxyCheck()
{
    TlsContext.get()->enableStrictProxyChecking();
}

JS_FRIEND_API(void)
JS::AssertGCThingMustBeTenured(JSObject* obj)
{
    MOZ_ASSERT(obj->isTenured() &&
               (!IsNurseryAllocable(obj->asTenured().getAllocKind()) ||
                obj->getClass()->hasFinalize()));
}

JS_FRIEND_API(void)
JS::AssertGCThingIsNotAnObjectSubclass(Cell* cell)
{
    MOZ_ASSERT(cell);
    MOZ_ASSERT(cell->getTraceKind() != JS::TraceKind::Object);
}

JS_FRIEND_API(void)
js::gc::AssertGCThingHasType(js::gc::Cell* cell, JS::TraceKind kind)
{
    if (!cell)
        MOZ_ASSERT(kind == JS::TraceKind::Null);
    else if (IsInsideNursery(cell))
        MOZ_ASSERT(kind == JS::TraceKind::Object);
    else
        MOZ_ASSERT(MapAllocToTraceKind(cell->asTenured().getAllocKind()) == kind);
}
#endif

JS::AutoAssertNoGC::AutoAssertNoGC(JSContext* maybecx)
  : cx_(maybecx ? maybecx : TlsContext.get())
{
    cx_->inUnsafeRegion++;
}

JS::AutoAssertNoGC::~AutoAssertNoGC()
{
    MOZ_ASSERT(cx_->inUnsafeRegion > 0);
    cx_->inUnsafeRegion--;
}

#ifdef DEBUG
JS::AutoAssertNoAlloc::AutoAssertNoAlloc(JSContext* cx)
  : gc(nullptr)
{
    disallowAlloc(cx->runtime());
}

void JS::AutoAssertNoAlloc::disallowAlloc(JSRuntime* rt)
{
    MOZ_ASSERT(!gc);
    gc = &rt->gc;
    TlsContext.get()->disallowAlloc();
}

JS::AutoAssertNoAlloc::~AutoAssertNoAlloc()
{
    if (gc)
        TlsContext.get()->allowAlloc();
}

AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc()
{
    TlsContext.get()->disallowNurseryAlloc();
}

AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc()
{
    TlsContext.get()->allowNurseryAlloc();
}

JS::AutoEnterCycleCollection::AutoEnterCycleCollection(JSRuntime* rt)
{
    MOZ_ASSERT(!JS::CurrentThreadIsHeapBusy());
    TlsContext.get()->heapState = HeapState::CycleCollecting;
}

JS::AutoEnterCycleCollection::~AutoEnterCycleCollection()
{
    MOZ_ASSERT(JS::CurrentThreadIsHeapCycleCollecting());
    TlsContext.get()->heapState = HeapState::Idle;
}

JS::AutoAssertGCCallback::AutoAssertGCCallback()
  : AutoSuppressGCAnalysis()
{
    MOZ_ASSERT(JS::CurrentThreadIsHeapCollecting());
}
#endif

JS_FRIEND_API(const char*)
JS::GCTraceKindToAscii(JS::TraceKind kind)
{
    switch (kind) {
#define MAP_NAME(name, _0, _1) case JS::TraceKind::name: return #name;
JS_FOR_EACH_TRACEKIND(MAP_NAME);
#undef MAP_NAME
      default: return "Invalid";
    }
}

JS::GCCellPtr::GCCellPtr(const Value& v)
  : ptr(0)
{
    if (v.isString())
        ptr = checkedCast(v.toString(), JS::TraceKind::String);
    else if (v.isObject())
        ptr = checkedCast(&v.toObject(), JS::TraceKind::Object);
    else if (v.isSymbol())
        ptr = checkedCast(v.toSymbol(), JS::TraceKind::Symbol);
    else if (v.isPrivateGCThing())
        ptr = checkedCast(v.toGCThing(), v.toGCThing()->getTraceKind());
    else
        ptr = checkedCast(nullptr, JS::TraceKind::Null);
}

JS::TraceKind
JS::GCCellPtr::outOfLineKind() const
{
    MOZ_ASSERT((ptr & OutOfLineTraceKindMask) == OutOfLineTraceKindMask);
    MOZ_ASSERT(asCell()->isTenured());
    return MapAllocToTraceKind(asCell()->asTenured().getAllocKind());
}

bool
JS::GCCellPtr::mayBeOwnedByOtherRuntimeSlow() const
{
    if (is<JSString>())
        return as<JSString>().isPermanentAtom();
    return as<Symbol>().isWellKnownSymbol();
}
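// Illustrative sketch (not used by the engine): the Value constructor above
// lets callers wrap any GC thing held in a Value, e.g. for some JSString*
// 'str' (a hypothetical variable):
//
//   JS::GCCellPtr thing(JS::StringValue(str));
//   MOZ_ASSERT(thing.kind() == JS::TraceKind::String);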
#ifdef JSGC_HASH_TABLE_CHECKS
void
js::gc::CheckHashTablesAfterMovingGC(JSRuntime* rt)
{
    /*
     * Check that internal hash tables no longer have any pointers to things
     * that have been moved.
     */
    rt->geckoProfiler().checkStringsMapAfterMovingGC();
    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        zone->checkUniqueIdTableAfterMovingGC();
        zone->checkInitialShapesTableAfterMovingGC();
        zone->checkBaseShapeTableAfterMovingGC();

        JS::AutoCheckCannotGC nogc;
        for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next()) {
            if (ShapeTable* table = baseShape->maybeTable(nogc))
                table->checkAfterMovingGC();
        }
    }
    for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
        c->objectGroups.checkTablesAfterMovingGC();
        c->dtoaCache.checkCacheAfterMovingGC();
        c->checkWrapperMapAfterMovingGC();
        c->checkScriptMapsAfterMovingGC();
        if (c->debugEnvs)
            c->debugEnvs->checkHashTablesAfterMovingGC(rt);
    }
}
#endif

JS_PUBLIC_API(void)
JS::PrepareZoneForGC(Zone* zone)
{
    zone->scheduleGC();
}

JS_PUBLIC_API(void)
JS::PrepareForFullGC(JSContext* cx)
{
    for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next())
        zone->scheduleGC();
}

JS_PUBLIC_API(void)
JS::PrepareForIncrementalGC(JSContext* cx)
{
    if (!JS::IsIncrementalGCInProgress(cx))
        return;

    for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
        if (zone->wasGCStarted())
            PrepareZoneForGC(zone);
    }
}

JS_PUBLIC_API(bool)
JS::IsGCScheduled(JSContext* cx)
{
    for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
        if (zone->isGCScheduled())
            return true;
    }

    return false;
}

JS_PUBLIC_API(void)
JS::SkipZoneForGC(Zone* zone)
{
    zone->unscheduleGC();
}

JS_PUBLIC_API(void)
JS::GCForReason(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason)
{
    MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    cx->runtime()->gc.gc(gckind, reason);
}

JS_PUBLIC_API(void)
JS::StartIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason, int64_t millis)
{
    MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    cx->runtime()->gc.startGC(gckind, reason, millis);
}

JS_PUBLIC_API(void)
JS::IncrementalGCSlice(JSContext* cx, gcreason::Reason reason, int64_t millis)
{
    cx->runtime()->gc.gcSlice(reason, millis);
}

JS_PUBLIC_API(void)
JS::FinishIncrementalGC(JSContext* cx, gcreason::Reason reason)
{
    cx->runtime()->gc.finishGC(reason);
}

JS_PUBLIC_API(void)
JS::AbortIncrementalGC(JSContext* cx)
{
    if (IsIncrementalGCInProgress(cx))
        cx->runtime()->gc.abortGC();
}
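// Illustrative sketch (not part of the engine): an embedder can drive an
// incremental collection with the public API above. The reason code and the
// 10 ms slice budget are arbitrary example values.
//
//   JS::PrepareForFullGC(cx);
//   JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 10);
//   while (JS::IsIncrementalGCInProgress(cx))
//       JS::IncrementalGCSlice(cx, JS::gcreason::API, 10);
//
// A slice may complete the collection early; JS::FinishIncrementalGC() or
// JS::AbortIncrementalGC() can be used to wind the collection up instead.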
char16_t*
JS::GCDescription::formatSliceMessage(JSContext* cx) const
{
    UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSliceMessage();

    size_t nchars = strlen(cstr.get());
    UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    if (!out)
        return nullptr;
    out.get()[nchars] = 0;

    CopyAndInflateChars(out.get(), cstr.get(), nchars);
    return out.release();
}

char16_t*
JS::GCDescription::formatSummaryMessage(JSContext* cx) const
{
    UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSummaryMessage();

    size_t nchars = strlen(cstr.get());
    UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    if (!out)
        return nullptr;
    out.get()[nchars] = 0;

    CopyAndInflateChars(out.get(), cstr.get(), nchars);
    return out.release();
}

JS::dbg::GarbageCollectionEvent::Ptr
JS::GCDescription::toGCEvent(JSContext* cx) const
{
    return JS::dbg::GarbageCollectionEvent::Create(cx->runtime(), cx->runtime()->gc.stats(),
                                                   cx->runtime()->gc.majorGCCount());
}

char16_t*
JS::GCDescription::formatJSON(JSContext* cx, uint64_t timestamp) const
{
    UniqueChars cstr = cx->runtime()->gc.stats().renderJsonMessage(timestamp);

    size_t nchars = strlen(cstr.get());
    UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    if (!out)
        return nullptr;
    out.get()[nchars] = 0;

    CopyAndInflateChars(out.get(), cstr.get(), nchars);
    return out.release();
}

TimeStamp
JS::GCDescription::startTime(JSContext* cx) const
{
    return cx->runtime()->gc.stats().start();
}

TimeStamp
JS::GCDescription::endTime(JSContext* cx) const
{
    return cx->runtime()->gc.stats().end();
}

TimeStamp
JS::GCDescription::lastSliceStart(JSContext* cx) const
{
    return cx->runtime()->gc.stats().slices().back().start;
}

TimeStamp
JS::GCDescription::lastSliceEnd(JSContext* cx) const
{
    return cx->runtime()->gc.stats().slices().back().end;
}

JS::UniqueChars
JS::GCDescription::sliceToJSON(JSContext* cx) const
{
    size_t slices = cx->runtime()->gc.stats().slices().length();
    MOZ_ASSERT(slices > 0);
    return cx->runtime()->gc.stats().renderJsonSlice(slices - 1);
}

JS::UniqueChars
JS::GCDescription::summaryToJSON(JSContext* cx) const
{
    return cx->runtime()->gc.stats().renderJsonMessage(0, false);
}

JS_PUBLIC_API(JS::UniqueChars)
JS::MinorGcToJSON(JSContext* cx)
{
    JSRuntime* rt = cx->runtime();
    return rt->gc.stats().renderNurseryJson(rt);
}

JS_PUBLIC_API(JS::GCSliceCallback)
JS::SetGCSliceCallback(JSContext* cx, GCSliceCallback callback)
{
    return cx->runtime()->gc.setSliceCallback(callback);
}

JS_PUBLIC_API(JS::DoCycleCollectionCallback)
JS::SetDoCycleCollectionCallback(JSContext* cx, JS::DoCycleCollectionCallback callback)
{
    return cx->runtime()->gc.setDoCycleCollectionCallback(callback);
}

JS_PUBLIC_API(JS::GCNurseryCollectionCallback)
JS::SetGCNurseryCollectionCallback(JSContext* cx, GCNurseryCollectionCallback callback)
{
    return cx->runtime()->gc.setNurseryCollectionCallback(callback);
}

JS_PUBLIC_API(void)
JS::DisableIncrementalGC(JSContext* cx)
{
    cx->runtime()->gc.disallowIncrementalGC();
}

JS_PUBLIC_API(bool)
JS::IsIncrementalGCEnabled(JSContext* cx)
{
    return cx->runtime()->gc.isIncrementalGCEnabled();
}

JS_PUBLIC_API(bool)
JS::IsIncrementalGCInProgress(JSContext* cx)
{
    return cx->runtime()->gc.isIncrementalGCInProgress() &&
           !cx->runtime()->gc.isVerifyPreBarriersEnabled();
}

JS_PUBLIC_API(bool)
JS::IsIncrementalGCInProgress(JSRuntime* rt)
{
    return rt->gc.isIncrementalGCInProgress() && !rt->gc.isVerifyPreBarriersEnabled();
}

JS_PUBLIC_API(bool)
JS::IsIncrementalBarrierNeeded(JSContext* cx)
{
    if (JS::CurrentThreadIsHeapBusy())
        return false;

    auto state = cx->runtime()->gc.state();
    return state != gc::State::NotActive && state <= gc::State::Sweep;
}

JS_PUBLIC_API(void)
JS::IncrementalPreWriteBarrier(JSObject* obj)
{
    if (!obj)
        return;

    MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    JSObject::writeBarrierPre(obj);
}

struct IncrementalReadBarrierFunctor {
    template <typename T> void operator()(T* t) { T::readBarrier(t); }
};

JS_PUBLIC_API(void)
JS::IncrementalReadBarrier(GCCellPtr thing)
{
    if (!thing)
        return;

    MOZ_ASSERT(!JS::CurrentThreadIsHeapMajorCollecting());
    DispatchTyped(IncrementalReadBarrierFunctor(), thing);
}

JS_PUBLIC_API(bool)
JS::WasIncrementalGC(JSRuntime* rt)
{
    return rt->gc.isIncrementalGc();
}

uint64_t
js::gc::NextCellUniqueId(JSRuntime* rt)
{
    return rt->gc.nextCellUniqueId();
}

namespace js {
namespace gc {
namespace MemInfo {

static bool
GCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.usage.gcBytes()));
    return true;
}

static bool
GCMaxBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.tunables.gcMaxBytes()));
    return true;
}

static bool
MallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.getMallocBytes()));
    return true;
}

static bool
MaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.maxMallocBytesAllocated()));
    return true;
}

static bool
GCHighFreqGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setBoolean(cx->runtime()->gc.schedulingState.inHighFrequencyGCMode());
    return true;
}
static bool
GCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.gcNumber()));
    return true;
}

static bool
MajorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.majorGCCount()));
    return true;
}

static bool
MinorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->runtime()->gc.minorGCCount()));
    return true;
}

static bool
ZoneGCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->usage.gcBytes()));
    return true;
}

static bool
ZoneGCTriggerBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->threshold.gcTriggerBytes()));
    return true;
}

static bool
ZoneGCAllocTriggerGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    bool highFrequency = cx->runtime()->gc.schedulingState.inHighFrequencyGCMode();
    args.rval().setNumber(double(cx->zone()->threshold.allocTrigger(highFrequency)));
    return true;
}

static bool
ZoneMallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->GCMallocBytes()));
    return true;
}

static bool
ZoneMaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->GCMaxMallocBytes()));
    return true;
}

static bool
ZoneGCDelayBytesGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->gcDelayBytes));
    return true;
}

static bool
ZoneGCHeapGrowthFactorGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    AutoLockGC lock(cx->runtime());
    args.rval().setNumber(cx->zone()->threshold.gcHeapGrowthFactor());
    return true;
}

static bool
ZoneGCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setNumber(double(cx->zone()->gcNumber()));
    return true;
}

#ifdef JS_MORE_DETERMINISTIC
static bool
DummyGetter(JSContext* cx, unsigned argc, Value* vp)
{
    CallArgs args = CallArgsFromVp(argc, vp);
    args.rval().setUndefined();
    return true;
}
#endif

} /* namespace MemInfo */

JSObject*
NewMemoryInfoObject(JSContext* cx)
{
    RootedObject obj(cx, JS_NewObject(cx, nullptr));
    if (!obj)
        return nullptr;

    using namespace MemInfo;
    struct NamedGetter {
        const char* name;
        JSNative getter;
    } getters[] = {
        { "gcBytes", GCBytesGetter },
        { "gcMaxBytes", GCMaxBytesGetter },
        { "mallocBytesRemaining", MallocBytesGetter },
        { "maxMalloc", MaxMallocGetter },
        { "gcIsHighFrequencyMode", GCHighFreqGetter },
        { "gcNumber", GCNumberGetter },
        { "majorGCCount", MajorGCCountGetter },
        { "minorGCCount", MinorGCCountGetter }
    };

    for (auto pair : getters) {
#ifdef JS_MORE_DETERMINISTIC
        JSNative getter = DummyGetter;
#else
        JSNative getter = pair.getter;
#endif
        if (!JS_DefineProperty(cx, obj, pair.name, UndefinedHandleValue,
                               JSPROP_ENUMERATE | JSPROP_SHARED, getter, nullptr))
        {
            return nullptr;
        }
    }

    RootedObject zoneObj(cx, JS_NewObject(cx, nullptr));
    if (!zoneObj)
        return nullptr;

    if (!JS_DefineProperty(cx, obj, "zone", zoneObj, JSPROP_ENUMERATE))
        return nullptr;

    struct NamedZoneGetter {
        const char* name;
        JSNative getter;
    } zoneGetters[] = {
        { "gcBytes", ZoneGCBytesGetter },
        { "gcTriggerBytes", ZoneGCTriggerBytesGetter },
        { "gcAllocTrigger", ZoneGCAllocTriggerGetter },
        { "mallocBytesRemaining", ZoneMallocBytesGetter },
        { "maxMalloc", ZoneMaxMallocGetter },
        { "delayBytes", ZoneGCDelayBytesGetter },
        { "heapGrowthFactor", ZoneGCHeapGrowthFactorGetter },
        { "gcNumber", ZoneGCNumberGetter }
    };

    for (auto pair : zoneGetters) {
#ifdef JS_MORE_DETERMINISTIC
        JSNative getter = DummyGetter;
#else
        JSNative getter = pair.getter;
#endif
        if (!JS_DefineProperty(cx, zoneObj, pair.name, UndefinedHandleValue,
                               JSPROP_ENUMERATE | JSPROP_SHARED, getter, nullptr))
        {
            return nullptr;
        }
    }

    return obj;
}
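// Illustrative sketch (hypothetical embedding code): the object created by
// NewMemoryInfoObject() could be exposed on an embedder-owned global under
// any property name, mirroring the "zone" sub-object pattern used above.
// 'global' and the "gcMemInfo" name are assumptions for the example only.
//
//   JS::RootedObject info(cx, js::gc::NewMemoryInfoObject(cx));
//   if (!info || !JS_DefineProperty(cx, global, "gcMemInfo", info, JSPROP_ENUMERATE))
//       return false;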
const char*
StateName(State state)
{
    switch (state) {
#define MAKE_CASE(name) case State::name: return #name;
      GCSTATES(MAKE_CASE)
#undef MAKE_CASE
    }
    MOZ_MAKE_COMPILER_ASSUME_IS_UNREACHABLE("invalid gc::State enum value");
}

void
AutoAssertHeapBusy::checkCondition(JSRuntime* rt)
{
    this->rt = rt;
    MOZ_ASSERT(JS::CurrentThreadIsHeapBusy());
}

void
AutoAssertEmptyNursery::checkCondition(JSContext* cx)
{
    if (!noAlloc)
        noAlloc.emplace();
    this->cx = cx;
    MOZ_ASSERT(AllNurseriesAreEmpty(cx->runtime()));
}

AutoEmptyNursery::AutoEmptyNursery(JSContext* cx)
  : AutoAssertEmptyNursery()
{
    MOZ_ASSERT(!cx->suppressGC);
    cx->runtime()->gc.stats().suspendPhases();
    EvictAllNurseries(cx->runtime(), JS::gcreason::EVICT_NURSERY);
    cx->runtime()->gc.stats().resumePhases();
    checkCondition(cx);
}

} /* namespace gc */
} /* namespace js */

#ifdef DEBUG
void
js::gc::Cell::dump(FILE* fp) const
{
    switch (getTraceKind()) {
      case JS::TraceKind::Object:
        reinterpret_cast<const JSObject*>(this)->dump(fp);
        break;

      case JS::TraceKind::String:
        js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), fp);
        break;

      case JS::TraceKind::Shape:
        reinterpret_cast<const Shape*>(this)->dump(fp);
        break;

      default:
        fprintf(fp, "%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()), (void*) this);
    }
}

// For use in a debugger.
void
js::gc::Cell::dump() const
{
    dump(stderr);
}
#endif

static inline bool
CanCheckGrayBits(const Cell* cell)
{
    MOZ_ASSERT(cell);
    if (!cell->isTenured())
        return false;

    auto tc = &cell->asTenured();
    auto rt = tc->runtimeFromAnyThread();
    return CurrentThreadCanAccessRuntime(rt) && rt->gc.areGrayBitsValid();
}

JS_PUBLIC_API(bool)
js::gc::detail::CellIsMarkedGrayIfKnown(const Cell* cell)
{
    // We ignore the gray marking state of cells and return false in the
    // following cases:
    //
    // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
    //
    // 2) When we are in an incremental GC and examine a cell that is in a zone
    // that is not being collected. Gray targets of CCWs that are marked black
    // by a barrier will eventually be marked black in the next GC slice.
    //
    // 3) When we are not on the runtime's active thread. Helper threads might
    // call this while parsing, and they are not allowed to inspect the
    // runtime's incremental state. The objects being operated on are not able
    // to be collected and will not be marked any color.
    if (!CanCheckGrayBits(cell))
        return false;

    auto tc = &cell->asTenured();
    MOZ_ASSERT(!tc->zoneFromAnyThread()->usedByHelperThread());

    auto rt = tc->runtimeFromActiveCooperatingThread();
    if (rt->gc.isIncrementalGCInProgress() && !tc->zone()->wasGCStarted())
        return false;

    return detail::CellIsMarkedGray(tc);
}
#ifdef DEBUG
JS_PUBLIC_API(bool)
js::gc::detail::CellIsNotGray(const Cell* cell)
{
    // Check that a cell is not marked gray.
    //
    // Since this is a debug-only check, take account of the eventual mark
    // state of cells that will be marked black by the next GC slice in an
    // incremental GC. For performance reasons we don't do this in
    // CellIsMarkedGrayIfKnown.

    // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
    // called while iterating the heap for memory reporting.
    MOZ_ASSERT(!JS::CurrentThreadIsHeapCollecting());
    MOZ_ASSERT(!JS::CurrentThreadIsHeapCycleCollecting());

    if (!CanCheckGrayBits(cell))
        return true;

    auto tc = &cell->asTenured();
    if (!detail::CellIsMarkedGray(tc))
        return true;

    // The cell is gray, but may eventually be marked black if we are in an
    // incremental GC and the cell is reachable by something on the mark stack.

    auto rt = tc->runtimeFromAnyThread();
    if (!rt->gc.isIncrementalGCInProgress() || tc->zone()->wasGCStarted())
        return false;

    Zone* sourceZone = rt->gc.marker.stackContainsCrossZonePointerTo(tc);
    if (sourceZone && sourceZone->wasGCStarted())
        return true;

    return false;
}
#endif